00:00:00.000 Started by upstream project "autotest-per-patch" build number 132783
00:00:00.000 originally caused by:
00:00:00.000  Started by user sys_sgci
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.028 The recommended git tool is: git
00:00:00.029 using credential 00000000-0000-0000-0000-000000000002
00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.046 Fetching changes from the remote Git repository
00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.076 Using shallow fetch with depth 1
00:00:00.076 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.076 > git --version # timeout=10
00:00:00.114 > git --version # 'git version 2.39.2'
00:00:00.114 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.130 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.130 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.862 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.874 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.887 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.887 > git config core.sparsecheckout # timeout=10
00:00:04.899 > git read-tree -mu HEAD # timeout=10
00:00:04.915 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.938 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.938 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.021 [Pipeline] Start of Pipeline
00:00:05.036 [Pipeline] library
00:00:05.038 Loading library shm_lib@master
00:00:05.038 Library shm_lib@master is cached. Copying from home.
00:00:05.053 [Pipeline] node
00:00:05.081 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.084 [Pipeline] {
00:00:05.096 [Pipeline] catchError
00:00:05.097 [Pipeline] {
00:00:05.113 [Pipeline] wrap
00:00:05.122 [Pipeline] {
00:00:05.128 [Pipeline] stage
00:00:05.129 [Pipeline] { (Prologue)
00:00:05.339 [Pipeline] sh
00:00:05.629 + logger -p user.info -t JENKINS-CI
00:00:05.645 [Pipeline] echo
00:00:05.646 Node: GP8
00:00:05.653 [Pipeline] sh
00:00:05.951 [Pipeline] setCustomBuildProperty
00:00:05.962 [Pipeline] echo
00:00:05.964 Cleanup processes
00:00:05.969 [Pipeline] sh
00:00:06.256 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.256 1874195 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.292 [Pipeline] sh
00:00:06.594 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.594 ++ grep -v 'sudo pgrep'
00:00:06.594 ++ awk '{print $1}'
00:00:06.594 + sudo kill -9
00:00:06.594 + true
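The "Cleanup processes" step just above boils down to a pgrep/grep/awk/kill pipeline: pgrep lists anything still running out of the workspace, grep drops the pgrep invocation itself, awk keeps the PID column, and the trailing true absorbs the failure when kill receives no PIDs, which is exactly what happened here (the bare "+ sudo kill -9" followed by "+ true"). A minimal standalone sketch of the same idiom, as a hypothetical helper rather than the pipeline's actual script:

  workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # pgrep -af prints "PID full-command-line"; drop our own pgrep, keep PIDs.
  pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')
  # Guarding with -n avoids the argument-less "kill -9" failure seen in the log.
  [ -n "$pids" ] && sudo kill -9 $pids || true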
00:00:06.607 [Pipeline] cleanWs
00:00:06.617 [WS-CLEANUP] Deleting project workspace...
00:00:06.617 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.623 [WS-CLEANUP] done
00:00:06.628 [Pipeline] setCustomBuildProperty
00:00:06.644 [Pipeline] sh
00:00:06.925 + sudo git config --global --replace-all safe.directory '*'
00:00:07.017 [Pipeline] httpRequest
00:00:09.879 [Pipeline] echo
00:00:09.880 Sorcerer 10.211.164.101 is alive
00:00:09.889 [Pipeline] retry
00:00:09.891 [Pipeline] {
00:00:09.905 [Pipeline] httpRequest
00:00:09.910 HttpMethod: GET
00:00:09.910 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.910 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.938 Response Code: HTTP/1.1 200 OK
00:00:09.938 Success: Status code 200 is in the accepted range: 200,404
00:00:09.939 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.094 [Pipeline] }
00:00:24.109 [Pipeline] // retry
00:00:24.117 [Pipeline] sh
00:00:24.420 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.696 [Pipeline] httpRequest
00:00:25.606 [Pipeline] echo
00:00:25.607 Sorcerer 10.211.164.101 is alive
00:00:25.614 [Pipeline] retry
00:00:25.615 [Pipeline] {
00:00:25.625 [Pipeline] httpRequest
00:00:25.629 HttpMethod: GET
00:00:25.629 URL: http://10.211.164.101/packages/spdk_b7d7c4b248996a6cfdc94ed4a7d5400f72c00fc8.tar.gz
00:00:25.632 Sending request to url: http://10.211.164.101/packages/spdk_b7d7c4b248996a6cfdc94ed4a7d5400f72c00fc8.tar.gz
00:00:25.644 Response Code: HTTP/1.1 200 OK
00:00:25.644 Success: Status code 200 is in the accepted range: 200,404
00:00:25.644 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b7d7c4b248996a6cfdc94ed4a7d5400f72c00fc8.tar.gz
00:05:01.943 [Pipeline] }
00:05:01.960 [Pipeline] // retry
00:05:01.967 [Pipeline] sh
00:05:02.256 + tar --no-same-owner -xf spdk_b7d7c4b248996a6cfdc94ed4a7d5400f72c00fc8.tar.gz
00:05:06.452 [Pipeline] sh
00:05:06.752 + git -C spdk log --oneline -n5
00:05:06.752 b7d7c4b24 env: handle possible DPDK errors in mem_map_init
00:05:06.752 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode
00:05:06.752 496bfd677 env: match legacy mem mode config with DPDK
00:05:06.752 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:05:06.752 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:05:06.762 [Pipeline] }
00:05:06.774 [Pipeline] // stage
00:05:06.781 [Pipeline] stage
00:05:06.783 [Pipeline] { (Prepare)
00:05:06.798 [Pipeline] writeFile
00:05:06.812 [Pipeline] sh
00:05:07.096 + logger -p user.info -t JENKINS-CI
00:05:07.110 [Pipeline] sh
00:05:07.394 + logger -p user.info -t JENKINS-CI
00:05:07.408 [Pipeline] sh
00:05:07.691 + cat autorun-spdk.conf
00:05:07.691 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:07.691 SPDK_TEST_NVMF=1
00:05:07.691 SPDK_TEST_NVME_CLI=1
00:05:07.691 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:07.691 SPDK_TEST_NVMF_NICS=e810
00:05:07.691 SPDK_TEST_VFIOUSER=1
00:05:07.691 SPDK_RUN_UBSAN=1
00:05:07.691 NET_TYPE=phy
00:05:07.698 RUN_NIGHTLY=0
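The autorun-spdk.conf printed above is the job's whole test matrix: flag-style shell variables that the test scripts source before deciding what to build and run. A minimal sketch of how such a file is consumed; the e810-to-ice mapping is taken from the trace that follows, any other branch is hypothetical:

  # Source the per-job config, then derive the kernel driver from the NIC family.
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
  case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;   # Intel E810 NICs are driven by ice, as seen below
    *)    DRIVERS= ;;      # hypothetical fallback: nothing extra to load
  esac
  for D in $DRIVERS; do sudo modprobe "$D"; done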
00:05:07.704 [Pipeline] readFile
00:05:07.726 [Pipeline] withEnv
00:05:07.727 [Pipeline] {
00:05:07.739 [Pipeline] sh
00:05:08.028 + set -ex
00:05:08.028 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:05:08.028 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:08.028 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:08.028 ++ SPDK_TEST_NVMF=1
00:05:08.028 ++ SPDK_TEST_NVME_CLI=1
00:05:08.028 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:08.028 ++ SPDK_TEST_NVMF_NICS=e810
00:05:08.028 ++ SPDK_TEST_VFIOUSER=1
00:05:08.028 ++ SPDK_RUN_UBSAN=1
00:05:08.028 ++ NET_TYPE=phy
00:05:08.028 ++ RUN_NIGHTLY=0
00:05:08.028 + case $SPDK_TEST_NVMF_NICS in
00:05:08.028 + DRIVERS=ice
00:05:08.028 + [[ tcp == \r\d\m\a ]]
00:05:08.028 + [[ -n ice ]]
00:05:08.028 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:05:08.028 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:05:08.028 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:05:08.028 rmmod: ERROR: Module irdma is not currently loaded
00:05:08.028 rmmod: ERROR: Module i40iw is not currently loaded
00:05:08.028 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:05:08.028 + true
00:05:08.028 + for D in $DRIVERS
00:05:08.028 + sudo modprobe ice
00:05:08.028 + exit 0
00:05:08.038 [Pipeline] }
00:05:08.053 [Pipeline] // withEnv
00:05:08.059 [Pipeline] }
00:05:08.073 [Pipeline] // stage
00:05:08.079 [Pipeline] catchError
00:05:08.081 [Pipeline] {
00:05:08.094 [Pipeline] timeout
00:05:08.094 Timeout set to expire in 1 hr 0 min
00:05:08.096 [Pipeline] {
00:05:08.109 [Pipeline] stage
00:05:08.111 [Pipeline] { (Tests)
00:05:08.125 [Pipeline] sh
00:05:08.412 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:08.412 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:08.412 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:08.412 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:05:08.412 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:08.412 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:08.412 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:05:08.412 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:08.412 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:05:08.412 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:05:08.412 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:05:08.412 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:05:08.412 + source /etc/os-release
00:05:08.412 ++ NAME='Fedora Linux'
00:05:08.412 ++ VERSION='39 (Cloud Edition)'
00:05:08.412 ++ ID=fedora
00:05:08.412 ++ VERSION_ID=39
00:05:08.412 ++ VERSION_CODENAME=
00:05:08.412 ++ PLATFORM_ID=platform:f39
00:05:08.412 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:08.412 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:08.412 ++ LOGO=fedora-logo-icon
00:05:08.412 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:08.412 ++ HOME_URL=https://fedoraproject.org/
00:05:08.412 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:08.412 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:08.412 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:08.412 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:08.412 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:08.412 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:08.412 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:08.412 ++ SUPPORT_END=2024-11-12
00:05:08.412 ++ VARIANT='Cloud Edition'
00:05:08.412 ++ VARIANT_ID=cloud
00:05:08.412 + uname -a
00:05:08.412 Linux spdk-gp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:08.412 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:10.379 Hugepages
00:05:10.379 node     hugesize     free /  total
00:05:10.379 node0   1048576kB        0 /      0
00:05:10.379 node0      2048kB        0 /      0
00:05:10.379 node1   1048576kB        0 /      0
00:05:10.379 node1      2048kB        0 /      0
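The hugepage summary above is printed by setup.sh status; the underlying counts live in sysfs, so the same table can be read directly. A short sketch assuming the standard kernel hugepage sysfs layout:

  # Print free/total hugepages per NUMA node and page size.
  for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      size=${hp##*hugepages-}   # e.g. "2048kB" or "1048576kB"
      echo "${node##*/} $size: $(cat "$hp"/free_hugepages) free / $(cat "$hp"/nr_hugepages) total"
    done
  done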
00:05:10.379
00:05:10.379 Type     BDF             Vendor Device NUMA   Driver           Device     Block devices
00:05:10.379 I/OAT    0000:00:04.0    8086   0e20   0      ioatdma          -          -
00:05:10.379 I/OAT    0000:00:04.1    8086   0e21   0      ioatdma          -          -
00:05:10.379 I/OAT    0000:00:04.2    8086   0e22   0      ioatdma          -          -
00:05:10.379 I/OAT    0000:00:04.3    8086   0e23   0      ioatdma          -          -
00:05:10.379 I/OAT    0000:00:04.4    8086   0e24   0      ioatdma          -          -
00:05:10.379 I/OAT    0000:00:04.5    8086   0e25   0      ioatdma          -          -
00:05:10.380 I/OAT    0000:00:04.6    8086   0e26   0      ioatdma          -          -
00:05:10.380 I/OAT    0000:00:04.7    8086   0e27   0      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.0    8086   0e20   1      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.1    8086   0e21   1      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.2    8086   0e22   1      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.3    8086   0e23   1      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.4    8086   0e24   1      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.5    8086   0e25   1      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.6    8086   0e26   1      ioatdma          -          -
00:05:10.380 I/OAT    0000:80:04.7    8086   0e27   1      ioatdma          -          -
00:05:10.380 NVMe     0000:82:00.0    8086   0a54   1      nvme             nvme0      nvme0n1
00:05:10.380 + rm -f /tmp/spdk-ld-path
00:05:10.380 + source autorun-spdk.conf
00:05:10.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:10.380 ++ SPDK_TEST_NVMF=1
00:05:10.380 ++ SPDK_TEST_NVME_CLI=1
00:05:10.380 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:10.380 ++ SPDK_TEST_NVMF_NICS=e810
00:05:10.380 ++ SPDK_TEST_VFIOUSER=1
00:05:10.380 ++ SPDK_RUN_UBSAN=1
00:05:10.380 ++ NET_TYPE=phy
00:05:10.380 ++ RUN_NIGHTLY=0
00:05:10.380 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:10.380 + [[ -n '' ]]
00:05:10.380 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:10.380 + for M in /var/spdk/build-*-manifest.txt
00:05:10.380 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:10.380 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:10.380 + for M in /var/spdk/build-*-manifest.txt
00:05:10.380 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:10.380 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:10.380 + for M in /var/spdk/build-*-manifest.txt
00:05:10.380 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:10.380 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:05:10.380 ++ uname
00:05:10.380 + [[ Linux == \L\i\n\u\x ]]
00:05:10.380 + sudo dmesg -T
00:05:10.380 + sudo dmesg --clear
00:05:10.380 + dmesg_pid=1876142
00:05:10.380 + [[ Fedora Linux == FreeBSD ]]
00:05:10.380 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:10.380 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:10.380 + sudo dmesg -Tw
00:05:10.380 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:10.380 + [[ -x /usr/src/fio-static/fio ]]
00:05:10.380 + export FIO_BIN=/usr/src/fio-static/fio
00:05:10.380 + FIO_BIN=/usr/src/fio-static/fio
00:05:10.380 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:10.380 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:10.380 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:10.380 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:10.380 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:10.380 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:10.380 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:10.380 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:10.380 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:10.380 10:15:54 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:10.380 10:15:54 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:05:10.380 10:15:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:05:10.380 10:15:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:10.380 10:15:54 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:10.380 10:15:54 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:05:10.380 10:15:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:10.380 10:15:54 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:10.380 10:15:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:10.380 10:15:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:10.380 10:15:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:10.380 10:15:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.380 10:15:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.380 10:15:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.380 10:15:54 -- paths/export.sh@5 -- $ export PATH
00:05:10.380 10:15:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.380 10:15:54 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:05:10.380 10:15:54 -- common/autobuild_common.sh@493 -- $ date +%s
00:05:10.380 10:15:54 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733735754.XXXXXX
00:05:10.380 10:15:54 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733735754.y0rhPK
00:05:10.380 10:15:54 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:05:10.380 10:15:54 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:05:10.380 10:15:54 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:05:10.380 10:15:54 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:05:10.380 10:15:54 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:05:10.380 10:15:54 -- common/autobuild_common.sh@509 -- $ get_config_params
00:05:10.380 10:15:54 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:05:10.380 10:15:54 -- common/autotest_common.sh@10 -- $ set +x
00:05:10.380 10:15:54 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:05:10.380 10:15:54 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:05:10.380 10:15:54 -- pm/common@17 -- $ local monitor
00:05:10.380 10:15:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.380 10:15:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.380 10:15:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.380 10:15:54 -- pm/common@21 -- $ date +%s
00:05:10.380 10:15:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.380 10:15:54 -- pm/common@21 -- $ date +%s
00:05:10.380 10:15:54 -- pm/common@25 -- $ sleep 1
00:05:10.380 10:15:54 -- pm/common@21 -- $ date +%s
00:05:10.380 10:15:54 -- pm/common@21 -- $ date +%s
00:05:10.380 10:15:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735754
00:05:10.380 10:15:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735754
00:05:10.380 10:15:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735754
00:05:10.380 10:15:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735754
00:05:10.380 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735754_collect-cpu-temp.pm.log
00:05:10.380 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735754_collect-vmstat.pm.log
00:05:10.380 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735754_collect-cpu-load.pm.log
00:05:10.380 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735754_collect-bmc-pm.bmc.pm.log
00:05:11.760 10:15:55 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:05:11.760 10:15:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:11.760 10:15:55 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:11.760 10:15:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:11.760 10:15:55 -- spdk/autobuild.sh@16 -- $ date -u
00:05:11.760 Mon Dec 9 09:15:55 AM UTC 2024
00:05:11.760 10:15:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:11.760 v25.01-pre-314-gb7d7c4b24
00:05:11.760 10:15:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:11.760 10:15:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:11.760 10:15:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:11.760 10:15:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:11.760 10:15:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:11.760 10:15:56 -- common/autotest_common.sh@10 -- $ set +x
00:05:11.760 ************************************
00:05:11.760 START TEST ubsan
00:05:11.760 ************************************
00:05:11.760 10:15:56 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:05:11.760 using ubsan
00:05:11.760
00:05:11.760 real	0m0.000s
00:05:11.760 user	0m0.000s
00:05:11.760 sys	0m0.000s
00:05:11.760 10:15:56 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:11.760 10:15:56 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:11.760 ************************************
00:05:11.760 END TEST ubsan
00:05:11.760 ************************************
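The START TEST / END TEST banners and the real/user/sys block above come from SPDK's run_test helper. A rough sketch of what such a wrapper does, illustrative only and not the actual implementation in autotest_common.sh:

  run_test_sketch() {
    local name=$1 rc=0; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@" || rc=$?   # bash's time keyword prints the real/user/sys block
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }

Invoked as, for example, run_test_sketch ubsan echo 'using ubsan', it reproduces the banner-wrapped output and propagates the wrapped command's exit status.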
00:05:11.760 10:15:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:11.760 10:15:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:11.760 10:15:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:11.760 10:15:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:11.760 10:15:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:11.760 10:15:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:11.760 10:15:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:11.760 10:15:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:11.760 10:15:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:11.760 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:11.760 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:12.329 Using 'verbs' RDMA provider
00:05:28.166 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:46.272 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:46.272 Creating mk/config.mk...done.
00:05:46.272 Creating mk/cc.flags.mk...done.
00:05:46.272 Type 'make' to build.
00:05:46.272 10:16:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:05:46.272 10:16:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:46.272 10:16:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:46.272 10:16:28 -- common/autotest_common.sh@10 -- $ set +x
00:05:46.272 ************************************
00:05:46.272 START TEST make
00:05:46.272 ************************************
00:05:46.272 10:16:28 make -- common/autotest_common.sh@1129 -- $ make -j48
00:05:46.272 make[1]: Nothing to be done for 'all'.
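The ./configure invocation above carries the flag set assembled earlier by get_config_params (the config_params value in this log) plus --with-shared, which autobuild.sh appends. Replaying the same build outside CI reduces to roughly this, with paths as used by this job:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j48   # matches the run_test make invocation above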
00:05:46.272 The Meson build system
00:05:46.272 Version: 1.5.0
00:05:46.272 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:05:46.272 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:46.272 Build type: native build
00:05:46.272 Project name: libvfio-user
00:05:46.272 Project version: 0.0.1
00:05:46.272 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:46.272 C linker for the host machine: cc ld.bfd 2.40-14
00:05:46.272 Host machine cpu family: x86_64
00:05:46.272 Host machine cpu: x86_64
00:05:46.272 Run-time dependency threads found: YES
00:05:46.272 Library dl found: YES
00:05:46.272 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:46.272 Run-time dependency json-c found: YES 0.17
00:05:46.272 Run-time dependency cmocka found: YES 1.1.7
00:05:46.272 Program pytest-3 found: NO
00:05:46.272 Program flake8 found: NO
00:05:46.272 Program misspell-fixer found: NO
00:05:46.272 Program restructuredtext-lint found: NO
00:05:46.272 Program valgrind found: YES (/usr/bin/valgrind)
00:05:46.272 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:46.272 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:46.272 Compiler for C supports arguments -Wwrite-strings: YES
00:05:46.272 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:46.272 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:46.272 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:46.272 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:46.272 Build targets in project: 8
00:05:46.272 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:46.272 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:46.272
00:05:46.272 libvfio-user 0.0.1
00:05:46.272
00:05:46.272 User defined options
00:05:46.272 buildtype : debug
00:05:46.272 default_library: shared
00:05:46.272 libdir : /usr/local/lib
00:05:46.272
00:05:46.272 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:47.222 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:47.482 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:47.482 [2/37] Compiling C object samples/null.p/null.c.o
00:05:47.482 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:47.482 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:47.482 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:47.482 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:47.482 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:47.482 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:47.482 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:47.482 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:47.482 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:47.482 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:47.483 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:47.483 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:47.483 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:47.483 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:47.483 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:47.483 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:47.483 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:47.747 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:47.747 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:47.747 [22/37] Compiling C object samples/client.p/client.c.o
00:05:47.747 [23/37] Compiling C object samples/server.p/server.c.o
00:05:47.747 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:47.747 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:47.747 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:47.747 [27/37] Linking target samples/client
00:05:47.747 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:47.747 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:05:48.010 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:48.010 [31/37] Linking target test/unit_tests
00:05:48.010 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:48.010 [33/37] Linking target samples/server
00:05:48.010 [34/37] Linking target samples/gpio-pci-idio-16
00:05:48.010 [35/37] Linking target samples/shadow_ioeventfd_server
00:05:48.010 [36/37] Linking target samples/null
00:05:48.272 [37/37] Linking target samples/lspci
00:05:48.272 INFO: autodetecting backend as ninja
00:05:48.272 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
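The libvfio-user submodule build above, and the install step that follows, are plain meson/ninja with a staged install: configure a debug build directory, compile, then install under DESTDIR instead of the real prefix. A condensed sketch of that sequence; the options mirror the "User defined options" block above rather than the exact flags SPDK's build system passes:

  meson setup build-debug --buildtype debug --default-library shared
  ninja -C build-debug
  DESTDIR="$PWD/stage" meson install --quiet -C build-debug   # staged, not system-wide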
00:05:48.272 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:49.213 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:49.213 ninja: no work to do.
00:05:54.508 The Meson build system
00:05:54.508 Version: 1.5.0
00:05:54.508 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:05:54.508 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:05:54.508 Build type: native build
00:05:54.508 Program cat found: YES (/usr/bin/cat)
00:05:54.508 Project name: DPDK
00:05:54.508 Project version: 24.03.0
00:05:54.508 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:54.508 C linker for the host machine: cc ld.bfd 2.40-14
00:05:54.508 Host machine cpu family: x86_64
00:05:54.508 Host machine cpu: x86_64
00:05:54.508 Message: ## Building in Developer Mode ##
00:05:54.508 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:54.508 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:54.508 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:54.508 Program python3 found: YES (/usr/bin/python3)
00:05:54.508 Program cat found: YES (/usr/bin/cat)
00:05:54.508 Compiler for C supports arguments -march=native: YES
00:05:54.508 Checking for size of "void *" : 8
00:05:54.508 Checking for size of "void *" : 8 (cached)
00:05:54.508 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:54.508 Library m found: YES
00:05:54.508 Library numa found: YES
00:05:54.508 Has header "numaif.h" : YES
00:05:54.508 Library fdt found: NO
00:05:54.508 Library execinfo found: NO
00:05:54.508 Has header "execinfo.h" : YES
00:05:54.508 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:54.508 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:54.508 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:54.509 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:54.509 Run-time dependency openssl found: YES 3.1.1
00:05:54.509 Run-time dependency libpcap found: YES 1.10.4
00:05:54.509 Has header "pcap.h" with dependency libpcap: YES
00:05:54.509 Compiler for C supports arguments -Wcast-qual: YES
00:05:54.509 Compiler for C supports arguments -Wdeprecated: YES
00:05:54.509 Compiler for C supports arguments -Wformat: YES
00:05:54.509 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:54.509 Compiler for C supports arguments -Wformat-security: NO
00:05:54.509 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:54.509 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:54.509 Compiler for C supports arguments -Wnested-externs: YES
00:05:54.509 Compiler for C supports arguments -Wold-style-definition: YES
00:05:54.509 Compiler for C supports arguments -Wpointer-arith: YES
00:05:54.509 Compiler for C supports arguments -Wsign-compare: YES
00:05:54.509 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:54.509 Compiler for C supports arguments -Wundef: YES
00:05:54.509 Compiler for C supports arguments -Wwrite-strings: YES
00:05:54.509 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:54.509 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:54.509 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:54.509 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:54.509 Program objdump found: YES (/usr/bin/objdump)
00:05:54.509 Compiler for C supports arguments -mavx512f: YES
00:05:54.509 Checking if "AVX512 checking" compiles: YES
00:05:54.509 Fetching value of define "__SSE4_2__" : 1
00:05:54.509 Fetching value of define "__AES__" : 1
00:05:54.509 Fetching value of define "__AVX__" : 1
00:05:54.509 Fetching value of define "__AVX2__" : (undefined)
00:05:54.509 Fetching value of define "__AVX512BW__" : (undefined)
00:05:54.509 Fetching value of define "__AVX512CD__" : (undefined)
00:05:54.509 Fetching value of define "__AVX512DQ__" : (undefined)
00:05:54.509 Fetching value of define "__AVX512F__" : (undefined)
00:05:54.509 Fetching value of define "__AVX512VL__" : (undefined)
00:05:54.509 Fetching value of define "__PCLMUL__" : 1
00:05:54.509 Fetching value of define "__RDRND__" : 1
00:05:54.509 Fetching value of define "__RDSEED__" : (undefined)
00:05:54.509 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:54.509 Fetching value of define "__znver1__" : (undefined)
00:05:54.509 Fetching value of define "__znver2__" : (undefined)
00:05:54.509 Fetching value of define "__znver3__" : (undefined)
00:05:54.509 Fetching value of define "__znver4__" : (undefined)
00:05:54.509 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:54.509 Message: lib/log: Defining dependency "log"
00:05:54.509 Message: lib/kvargs: Defining dependency "kvargs"
00:05:54.509 Message: lib/telemetry: Defining dependency "telemetry"
00:05:54.509 Checking for function "getentropy" : NO
00:05:54.509 Message: lib/eal: Defining dependency "eal"
00:05:54.509 Message: lib/ring: Defining dependency "ring"
00:05:54.509 Message: lib/rcu: Defining dependency "rcu"
00:05:54.509 Message: lib/mempool: Defining dependency "mempool"
00:05:54.509 Message: lib/mbuf: Defining dependency "mbuf"
00:05:54.509 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:54.509 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:05:54.509 Compiler for C supports arguments -mpclmul: YES
00:05:54.509 Compiler for C supports arguments -maes: YES
00:05:54.509 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:54.509 Compiler for C supports arguments -mavx512bw: YES
00:05:54.509 Compiler for C supports arguments -mavx512dq: YES
00:05:54.509 Compiler for C supports arguments -mavx512vl: YES
00:05:54.509 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:54.509 Compiler for C supports arguments -mavx2: YES
00:05:54.509 Compiler for C supports arguments -mavx: YES
00:05:54.509 Message: lib/net: Defining dependency "net"
00:05:54.509 Message: lib/meter: Defining dependency "meter"
00:05:54.509 Message: lib/ethdev: Defining dependency "ethdev"
00:05:54.509 Message: lib/pci: Defining dependency "pci"
00:05:54.509 Message: lib/cmdline: Defining dependency "cmdline"
00:05:54.509 Message: lib/hash: Defining dependency "hash"
00:05:54.509 Message: lib/timer: Defining dependency "timer"
00:05:54.509 Message: lib/compressdev: Defining dependency "compressdev"
00:05:54.509 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:54.509 Message: lib/dmadev: Defining dependency "dmadev"
00:05:54.509 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:54.509 Message: lib/power: Defining dependency "power"
00:05:54.509 Message: lib/reorder: Defining dependency "reorder"
00:05:54.509 Message: lib/security: Defining dependency "security"
00:05:54.509 Has header "linux/userfaultfd.h" : YES
00:05:54.509 Has header "linux/vduse.h" : YES
00:05:54.509 Message: lib/vhost: Defining dependency "vhost"
00:05:54.509 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:54.509 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:54.509 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:54.509 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:54.509 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:54.509 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:54.509 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:54.509 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:54.509 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:54.509 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:54.509 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:54.509 Configuring doxy-api-html.conf using configuration
00:05:54.509 Configuring doxy-api-man.conf using configuration
00:05:54.509 Program mandb found: YES (/usr/bin/mandb)
00:05:54.509 Program sphinx-build found: NO
00:05:54.509 Configuring rte_build_config.h using configuration
00:05:54.509 Message:
00:05:54.509 =================
00:05:54.509 Applications Enabled
00:05:54.509 =================
00:05:54.509
00:05:54.509 apps:
00:05:54.509
00:05:54.509
00:05:54.509 Message:
00:05:54.509 =================
00:05:54.509 Libraries Enabled
00:05:54.509 =================
00:05:54.509
00:05:54.509 libs:
00:05:54.509 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:54.509 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:54.509 cryptodev, dmadev, power, reorder, security, vhost,
00:05:54.509
00:05:54.509 Message:
00:05:54.509 ===============
00:05:54.509 Drivers Enabled
00:05:54.509 ===============
00:05:54.509
00:05:54.509 common:
00:05:54.509
00:05:54.509 bus:
00:05:54.509 pci, vdev,
00:05:54.509 mempool:
00:05:54.509 ring,
00:05:54.509 dma:
00:05:54.509
00:05:54.509 net:
00:05:54.509
00:05:54.509 crypto:
00:05:54.509
00:05:54.509 compress:
00:05:54.509
00:05:54.509 vdpa:
00:05:54.509
00:05:54.509
00:05:54.509 Message:
00:05:54.509 =================
00:05:54.509 Content Skipped
00:05:54.509 =================
00:05:54.509
00:05:54.509 apps:
00:05:54.509 dumpcap: explicitly disabled via build config
00:05:54.509 graph: explicitly disabled via build config
00:05:54.509 pdump: explicitly disabled via build config
00:05:54.509 proc-info: explicitly disabled via build config
00:05:54.509 test-acl: explicitly disabled via build config
00:05:54.509 test-bbdev: explicitly disabled via build config
00:05:54.509 test-cmdline: explicitly disabled via build config
00:05:54.509 test-compress-perf: explicitly disabled via build config
00:05:54.509 test-crypto-perf: explicitly disabled via build config
00:05:54.509 test-dma-perf: explicitly disabled via build config
00:05:54.509 test-eventdev: explicitly disabled via build config
00:05:54.509 test-fib: explicitly disabled via build config
00:05:54.509 test-flow-perf: explicitly disabled via build config
00:05:54.509 test-gpudev: explicitly disabled via build config
00:05:54.509 test-mldev: explicitly disabled via build config
00:05:54.509 test-pipeline: explicitly disabled via build config
00:05:54.509 test-pmd: explicitly disabled via build config
00:05:54.509 test-regex: explicitly disabled via build config
00:05:54.509 test-sad: explicitly disabled via build config
00:05:54.509 test-security-perf: explicitly disabled via build config
00:05:54.509
00:05:54.509 libs:
00:05:54.509 argparse: explicitly disabled via build config
00:05:54.509 metrics: explicitly disabled via build config
00:05:54.509 acl: explicitly disabled via build config
00:05:54.509 bbdev: explicitly disabled via build config
00:05:54.509 bitratestats: explicitly disabled via build config
00:05:54.509 bpf: explicitly disabled via build config
00:05:54.509 cfgfile: explicitly disabled via build config
00:05:54.509 distributor: explicitly disabled via build config
00:05:54.509 efd: explicitly disabled via build config
00:05:54.509 eventdev: explicitly disabled via build config
00:05:54.509 dispatcher: explicitly disabled via build config
00:05:54.509 gpudev: explicitly disabled via build config
00:05:54.509 gro: explicitly disabled via build config
00:05:54.509 gso: explicitly disabled via build config
00:05:54.509 ip_frag: explicitly disabled via build config
00:05:54.509 jobstats: explicitly disabled via build config
00:05:54.509 latencystats: explicitly disabled via build config
00:05:54.509 lpm: explicitly disabled via build config
00:05:54.509 member: explicitly disabled via build config
00:05:54.509 pcapng: explicitly disabled via build config
00:05:54.509 rawdev: explicitly disabled via build config
00:05:54.509 regexdev: explicitly disabled via build config
00:05:54.509 mldev: explicitly disabled via build config
00:05:54.509 rib: explicitly disabled via build config
00:05:54.509 sched: explicitly disabled via build config
00:05:54.509 stack: explicitly disabled via build config
00:05:54.509 ipsec: explicitly disabled via build config
00:05:54.509 pdcp: explicitly disabled via build config
00:05:54.509 fib: explicitly disabled via build config
00:05:54.509 port: explicitly disabled via build config
00:05:54.509 pdump: explicitly disabled via build config
00:05:54.509 table: explicitly disabled via build config
00:05:54.510 pipeline: explicitly disabled via build config
00:05:54.510 graph: explicitly disabled via build config
00:05:54.510 node: explicitly disabled via build config
00:05:54.510
00:05:54.510 drivers:
00:05:54.510 common/cpt: not in enabled drivers build config
00:05:54.510 common/dpaax: not in enabled drivers build config
00:05:54.510 common/iavf: not in enabled drivers build config
00:05:54.510 common/idpf: not in enabled drivers build config
00:05:54.510 common/ionic: not in enabled drivers build config
00:05:54.510 common/mvep: not in enabled drivers build config
00:05:54.510 common/octeontx: not in enabled drivers build config
00:05:54.510 bus/auxiliary: not in enabled drivers build config
00:05:54.510 bus/cdx: not in enabled drivers build config
00:05:54.510 bus/dpaa: not in enabled drivers build config
00:05:54.510 bus/fslmc: not in enabled drivers build config
00:05:54.510 bus/ifpga: not in enabled drivers build config
00:05:54.510 bus/platform: not in enabled drivers build config
00:05:54.510 bus/uacce: not in enabled drivers build config
00:05:54.510 bus/vmbus: not in enabled drivers build config
00:05:54.510 common/cnxk: not in enabled drivers build config
00:05:54.510 common/mlx5: not in enabled drivers build config
00:05:54.510 common/nfp: not in enabled drivers build config
00:05:54.510 common/nitrox: not in enabled drivers build config
00:05:54.510 common/qat: not in enabled drivers build config
00:05:54.510 common/sfc_efx: not in enabled drivers build config
00:05:54.510 mempool/bucket: not in enabled drivers build config
00:05:54.510 mempool/cnxk: not in enabled drivers build config
00:05:54.510 mempool/dpaa: not in enabled drivers build config
00:05:54.510 mempool/dpaa2: not in enabled drivers build config
00:05:54.510 mempool/octeontx: not in enabled drivers build config
00:05:54.510 mempool/stack: not in enabled drivers build config
00:05:54.510 dma/cnxk: not in enabled drivers build config
00:05:54.510 dma/dpaa: not in enabled drivers build config
00:05:54.510 dma/dpaa2: not in enabled drivers build config
00:05:54.510 dma/hisilicon: not in enabled drivers build config
00:05:54.510 dma/idxd: not in enabled drivers build config
00:05:54.510 dma/ioat: not in enabled drivers build config
00:05:54.510 dma/skeleton: not in enabled drivers build config
00:05:54.510 net/af_packet: not in enabled drivers build config
00:05:54.510 net/af_xdp: not in enabled drivers build config
00:05:54.510 net/ark: not in enabled drivers build config
00:05:54.510 net/atlantic: not in enabled drivers build config
00:05:54.510 net/avp: not in enabled drivers build config
00:05:54.510 net/axgbe: not in enabled drivers build config
00:05:54.510 net/bnx2x: not in enabled drivers build config
00:05:54.510 net/bnxt: not in enabled drivers build config
00:05:54.510 net/bonding: not in enabled drivers build config
00:05:54.510 net/cnxk: not in enabled drivers build config
00:05:54.510 net/cpfl: not in enabled drivers build config
00:05:54.510 net/cxgbe: not in enabled drivers build config
00:05:54.510 net/dpaa: not in enabled drivers build config
00:05:54.510 net/dpaa2: not in enabled drivers build config
00:05:54.510 net/e1000: not in enabled drivers build config
00:05:54.510 net/ena: not in enabled drivers build config
00:05:54.510 net/enetc: not in enabled drivers build config
00:05:54.510 net/enetfec: not in enabled drivers build config
00:05:54.510 net/enic: not in enabled drivers build config
00:05:54.510 net/failsafe: not in enabled drivers build config
00:05:54.510 net/fm10k: not in enabled drivers build config
00:05:54.510 net/gve: not in enabled drivers build config
00:05:54.510 net/hinic: not in enabled drivers build config
00:05:54.510 net/hns3: not in enabled drivers build config
00:05:54.510 net/i40e: not in enabled drivers build config
00:05:54.510 net/iavf: not in enabled drivers build config
00:05:54.510 net/ice: not in enabled drivers build config
00:05:54.510 net/idpf: not in enabled drivers build config
00:05:54.510 net/igc: not in enabled drivers build config
00:05:54.510 net/ionic: not in enabled drivers build config
00:05:54.510 net/ipn3ke: not in enabled drivers build config
00:05:54.510 net/ixgbe: not in enabled drivers build config
00:05:54.510 net/mana: not in enabled drivers build config
00:05:54.510 net/memif: not in enabled drivers build config
00:05:54.510 net/mlx4: not in enabled drivers build config
00:05:54.510 net/mlx5: not in enabled drivers build config
00:05:54.510 net/mvneta: not in enabled drivers build config
00:05:54.510 net/mvpp2: not in enabled drivers build config
00:05:54.510 net/netvsc: not in enabled drivers build config
00:05:54.510 net/nfb: not in enabled drivers build config
00:05:54.510 net/nfp: not in enabled drivers build config
00:05:54.510 net/ngbe: not in enabled drivers build config
00:05:54.510 net/null: not in enabled drivers build config
00:05:54.510 net/octeontx: not in enabled drivers build config
00:05:54.510 net/octeon_ep: not in enabled drivers build config
00:05:54.510 net/pcap: not in enabled drivers build config
00:05:54.510 net/pfe: not in enabled drivers build config
00:05:54.510 net/qede: not in enabled drivers build config
00:05:54.510 net/ring: not in enabled drivers build config
00:05:54.510 net/sfc: not in enabled drivers build config
00:05:54.510 net/softnic: not in enabled drivers build config
00:05:54.510 net/tap: not in enabled drivers build config
00:05:54.510 net/thunderx: not in enabled drivers build config
00:05:54.510 net/txgbe: not in enabled drivers build config
00:05:54.510 net/vdev_netvsc: not in enabled drivers build config
00:05:54.510 net/vhost: not in enabled drivers build config
00:05:54.510 net/virtio: not in enabled drivers build config
00:05:54.510 net/vmxnet3: not in enabled drivers build config
00:05:54.510 raw/*: missing internal dependency, "rawdev"
00:05:54.510 crypto/armv8: not in enabled drivers build config
00:05:54.510 crypto/bcmfs: not in enabled drivers build config
00:05:54.510 crypto/caam_jr: not in enabled drivers build config
00:05:54.510 crypto/ccp: not in enabled drivers build config
00:05:54.510 crypto/cnxk: not in enabled drivers build config
00:05:54.510 crypto/dpaa_sec: not in enabled drivers build config
00:05:54.510 crypto/dpaa2_sec: not in enabled drivers build config
00:05:54.510 crypto/ipsec_mb: not in enabled drivers build config
00:05:54.510 crypto/mlx5: not in enabled drivers build config
00:05:54.510 crypto/mvsam: not in enabled drivers build config
00:05:54.510 crypto/nitrox: not in enabled drivers build config
00:05:54.510 crypto/null: not in enabled drivers build config
00:05:54.510 crypto/octeontx: not in enabled drivers build config
00:05:54.510 crypto/openssl: not in enabled drivers build config
00:05:54.510 crypto/scheduler: not in enabled drivers build config
00:05:54.510 crypto/uadk: not in enabled drivers build config
00:05:54.510 crypto/virtio: not in enabled drivers build config
00:05:54.510 compress/isal: not in enabled drivers build config
00:05:54.510 compress/mlx5: not in enabled drivers build config
00:05:54.510 compress/nitrox: not in enabled drivers build config
00:05:54.510 compress/octeontx: not in enabled drivers build config
00:05:54.510 compress/zlib: not in enabled drivers build config
00:05:54.510 regex/*: missing internal dependency, "regexdev"
00:05:54.510 ml/*: missing internal dependency, "mldev"
00:05:54.510 vdpa/ifc: not in enabled drivers build config
00:05:54.510 vdpa/mlx5: not in enabled drivers build config
00:05:54.510 vdpa/nfp: not in enabled drivers build config
00:05:54.510 vdpa/sfc: not in enabled drivers build config
00:05:54.510 event/*: missing internal dependency, "eventdev"
00:05:54.510 baseband/*: missing internal dependency, "bbdev"
00:05:54.510 gpu/*: missing internal dependency, "gpudev"
00:05:54.510
00:05:54.510
00:05:54.510 Build targets in project: 85
00:05:54.510
00:05:54.510 DPDK 24.03.0
00:05:54.510
00:05:54.510 User defined options
00:05:54.510 buildtype : debug
00:05:54.510 default_library : shared
00:05:54.510 libdir : lib
00:05:54.510 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:54.510 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:54.510 c_link_args :
00:05:54.510 cpu_instruction_set: native
00:05:54.510 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:05:54.510 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:05:54.510 enable_docs : false
00:05:54.510 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:54.510 enable_kmods : false
00:05:54.510 max_lcores : 128
00:05:54.510 tests : false
00:05:54.510
00:05:54.510 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
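The "User defined options" block above is how DPDK's build is pruned for this test run: whole app, library and driver groups are switched off at meson configure time and only a small driver set is re-enabled. A trimmed sketch of the equivalent invocation, with the lists abbreviated from the log rather than reproduced in full:

  meson setup build-tmp --buildtype debug --default-library shared \
      -Ddisable_apps=dumpcap,graph,pdump \
      -Ddisable_libs=acl,bbdev,bpf \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp   # the [n/268] compile steps below come from this stage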
00:05:55.084 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:05:55.084 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:55.346 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:55.346 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:55.346 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:55.346 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:55.346 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:55.346 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:55.346 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:55.346 [9/268] Linking static target lib/librte_kvargs.a
00:05:55.346 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:55.346 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:55.346 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:55.346 [13/268] Linking static target lib/librte_log.a
00:05:55.346 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:55.346 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:55.346 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:55.920 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:55.920 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:55.920 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:56.184 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:56.184 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:56.184 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:56.184 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:56.184 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:56.184 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:56.184 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:56.184 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:56.184 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:56.184 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:56.184 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:56.184 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:56.184 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:56.184 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:56.184 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:56.184 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:56.184 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:56.184 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:56.184 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:56.184 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:56.184 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:56.184 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:56.184 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:56.184 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:56.184 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:56.184 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:56.184 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:56.184 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:56.184 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:56.184 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:56.184 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:56.184 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:56.184 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:56.184 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:56.184 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:56.184 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:56.184 [56/268] Linking static target lib/librte_telemetry.a
00:05:56.184 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:56.185 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:56.185 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:56.446 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:56.446 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:56.446 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:56.446 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:56.446 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:56.446 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:56.446 [66/268] Linking target lib/librte_log.so.24.1
00:05:56.707 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:56.707 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:56.707 [69/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:56.707 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:56.707 [71/268] Linking static target lib/librte_pci.a
00:05:56.972 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:56.972 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:56.972 [74/268] Linking target lib/librte_kvargs.so.24.1
00:05:56.972 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:56.972 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:56.972 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:56.972 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:56.972 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:56.972 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:56.972 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:56.972 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:56.972 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:56.972 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:57.237 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:57.237 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:57.237 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:57.237 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:57.237 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:57.237 [90/268] Linking static target lib/librte_ring.a
00:05:57.237 [91/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:57.237 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:57.237 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:57.237 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:57.237 [95/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:57.237 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:57.237 [97/268] Linking static target lib/librte_meter.a
00:05:57.237 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:57.237 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:57.237 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:57.237 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:57.237 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:57.237 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:57.237 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:57.237 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:57.237 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:57.237 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:57.237 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:57.237 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:57.237 [110/268] Linking static target lib/librte_eal.a
00:05:57.237 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:57.506 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:57.506 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:57.506 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:57.506 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:57.506 [116/268] Linking static target lib/librte_rcu.a
00:05:57.506 [117/268] Linking static target lib/librte_mempool.a
00:05:57.506 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:57.506 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:57.506 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:57.506 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:57.506 [122/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:57.506 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:57.506 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:57.506 [125/268] Linking target lib/librte_telemetry.so.24.1
00:05:57.506 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:57.506 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:57.506 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:57.506 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:57.506 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:57.772 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:57.772 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:57.772 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:57.772 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:57.772 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:57.772 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:57.772 [137/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:58.031 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:58.031 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:58.031 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:58.031 [141/268] Linking static target lib/librte_net.a
00:05:58.031 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:58.031 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:58.031 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:58.031 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:58.031 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:58.031 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:58.031 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:58.031 [149/268] Linking static target lib/librte_cmdline.a
00:05:58.292 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:58.292 [151/268]
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:58.292 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:58.292 [153/268] Linking static target lib/librte_timer.a 00:05:58.292 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:58.292 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:58.292 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:58.292 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:58.292 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:58.292 [159/268] Linking static target lib/librte_dmadev.a 00:05:58.292 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.292 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:58.292 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:58.551 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:58.551 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:58.551 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:58.551 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:58.551 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:58.551 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.551 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:58.551 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:58.551 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:58.551 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.551 [173/268] Linking static target lib/librte_power.a 00:05:58.551 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:58.809 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:58.809 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:58.809 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:58.809 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:58.809 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:58.809 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:58.809 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:58.809 [182/268] Linking static target lib/librte_compressdev.a 00:05:58.809 [183/268] Linking static target lib/librte_hash.a 00:05:58.809 [184/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:58.809 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:58.809 [186/268] Linking static target lib/librte_mbuf.a 00:05:58.809 [187/268] Linking static target lib/librte_reorder.a 00:05:58.809 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:58.809 [189/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:58.809 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:58.809 [191/268] Generating lib/dmadev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:05:58.809 [192/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:59.068 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:59.068 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:59.068 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:59.068 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:59.068 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:59.068 [198/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.068 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:59.068 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:59.068 [201/268] Linking static target lib/librte_security.a 00:05:59.068 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:59.068 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:59.068 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:59.068 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.326 [206/268] Linking static target drivers/librte_bus_vdev.a 00:05:59.326 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:59.326 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:59.326 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:59.326 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:59.326 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:59.326 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:59.326 [213/268] Linking static target drivers/librte_mempool_ring.a 00:05:59.326 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.326 [215/268] Linking static target drivers/librte_bus_pci.a 00:05:59.326 [216/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.326 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:59.326 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.326 [219/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.584 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.584 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.843 [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.843 [223/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:59.843 [224/268] Linking static target lib/librte_ethdev.a 00:06:00.104 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:00.104 [226/268] Linking static target lib/librte_cryptodev.a 00:06:01.479 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.046 [228/268] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:04.576 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.576 [230/268] Linking target lib/librte_eal.so.24.1 00:06:04.576 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:04.576 [232/268] Linking target lib/librte_meter.so.24.1 00:06:04.576 [233/268] Linking target lib/librte_ring.so.24.1 00:06:04.576 [234/268] Linking target lib/librte_pci.so.24.1 00:06:04.576 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:04.576 [236/268] Linking target lib/librte_dmadev.so.24.1 00:06:04.576 [237/268] Linking target lib/librte_timer.so.24.1 00:06:04.837 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:04.837 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:04.837 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:04.837 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:04.837 [242/268] Linking target lib/librte_rcu.so.24.1 00:06:04.837 [243/268] Linking target lib/librte_mempool.so.24.1 00:06:04.837 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:04.837 [245/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.837 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:04.837 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:04.837 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:05.097 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:05.097 [250/268] Linking target lib/librte_mbuf.so.24.1 00:06:05.097 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:05.097 [252/268] Linking target lib/librte_compressdev.so.24.1 00:06:05.097 [253/268] Linking target lib/librte_reorder.so.24.1 00:06:05.097 [254/268] Linking target lib/librte_net.so.24.1 00:06:05.097 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:06:05.358 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:05.358 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:05.358 [258/268] Linking target lib/librte_hash.so.24.1 00:06:05.358 [259/268] Linking target lib/librte_cmdline.so.24.1 00:06:05.358 [260/268] Linking target lib/librte_ethdev.so.24.1 00:06:05.358 [261/268] Linking target lib/librte_security.so.24.1 00:06:05.618 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:05.618 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:05.618 [264/268] Linking target lib/librte_power.so.24.1 00:06:15.618 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:15.618 [266/268] Linking static target lib/librte_vhost.a 00:06:16.570 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.570 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:16.570 INFO: autodetecting backend as ninja 00:06:16.570 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:06:55.399 CC lib/ut/ut.o 00:06:55.399 CC lib/log/log.o 00:06:55.399 CC 
lib/log/log_flags.o 00:06:55.399 CC lib/ut_mock/mock.o 00:06:55.399 CC lib/log/log_deprecated.o 00:06:55.399 LIB libspdk_ut_mock.a 00:06:55.399 SO libspdk_ut_mock.so.6.0 00:06:55.399 LIB libspdk_ut.a 00:06:55.399 LIB libspdk_log.a 00:06:55.399 SO libspdk_ut.so.2.0 00:06:55.399 SO libspdk_log.so.7.1 00:06:55.399 SYMLINK libspdk_ut_mock.so 00:06:55.399 SYMLINK libspdk_ut.so 00:06:55.399 SYMLINK libspdk_log.so 00:06:55.399 CC lib/dma/dma.o 00:06:55.399 CC lib/util/base64.o 00:06:55.399 CC lib/util/bit_array.o 00:06:55.399 CC lib/util/crc16.o 00:06:55.399 CC lib/util/cpuset.o 00:06:55.399 CC lib/util/crc32c.o 00:06:55.399 CC lib/util/crc32.o 00:06:55.399 CC lib/util/crc64.o 00:06:55.399 CC lib/util/crc32_ieee.o 00:06:55.399 CC lib/util/dif.o 00:06:55.399 CC lib/util/fd.o 00:06:55.399 CC lib/util/fd_group.o 00:06:55.399 CC lib/util/file.o 00:06:55.399 CC lib/util/hexlify.o 00:06:55.399 CC lib/util/iov.o 00:06:55.399 CC lib/util/math.o 00:06:55.399 CC lib/util/net.o 00:06:55.399 CC lib/util/pipe.o 00:06:55.399 CC lib/util/strerror_tls.o 00:06:55.399 CC lib/util/string.o 00:06:55.399 CC lib/util/uuid.o 00:06:55.399 CC lib/util/zipf.o 00:06:55.399 CC lib/util/xor.o 00:06:55.399 CXX lib/trace_parser/trace.o 00:06:55.399 CC lib/util/md5.o 00:06:55.399 CC lib/ioat/ioat.o 00:06:55.399 CC lib/vfio_user/host/vfio_user_pci.o 00:06:55.399 CC lib/vfio_user/host/vfio_user.o 00:06:55.399 LIB libspdk_dma.a 00:06:55.399 LIB libspdk_ioat.a 00:06:55.399 SO libspdk_dma.so.5.0 00:06:55.399 SO libspdk_ioat.so.7.0 00:06:55.399 LIB libspdk_vfio_user.a 00:06:55.399 SO libspdk_vfio_user.so.5.0 00:06:55.399 SYMLINK libspdk_dma.so 00:06:55.399 SYMLINK libspdk_ioat.so 00:06:55.399 SYMLINK libspdk_vfio_user.so 00:06:55.399 LIB libspdk_util.a 00:06:55.399 SO libspdk_util.so.10.1 00:06:55.399 SYMLINK libspdk_util.so 00:06:55.399 LIB libspdk_trace_parser.a 00:06:55.399 SO libspdk_trace_parser.so.6.0 00:06:55.399 CC lib/vmd/vmd.o 00:06:55.399 CC lib/vmd/led.o 00:06:55.399 CC lib/env_dpdk/env.o 00:06:55.399 CC lib/env_dpdk/pci.o 00:06:55.399 CC lib/env_dpdk/memory.o 00:06:55.399 CC lib/env_dpdk/init.o 00:06:55.399 CC lib/env_dpdk/threads.o 00:06:55.399 CC lib/idxd/idxd.o 00:06:55.399 CC lib/env_dpdk/pci_ioat.o 00:06:55.399 CC lib/idxd/idxd_user.o 00:06:55.399 CC lib/env_dpdk/pci_virtio.o 00:06:55.399 CC lib/idxd/idxd_kernel.o 00:06:55.399 CC lib/env_dpdk/pci_vmd.o 00:06:55.399 CC lib/env_dpdk/pci_idxd.o 00:06:55.399 CC lib/env_dpdk/sigbus_handler.o 00:06:55.399 CC lib/env_dpdk/pci_event.o 00:06:55.399 CC lib/json/json_parse.o 00:06:55.399 CC lib/env_dpdk/pci_dpdk.o 00:06:55.399 CC lib/json/json_util.o 00:06:55.399 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:55.399 CC lib/json/json_write.o 00:06:55.399 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:55.399 CC lib/rdma_utils/rdma_utils.o 00:06:55.399 CC lib/conf/conf.o 00:06:55.399 SYMLINK libspdk_trace_parser.so 00:06:55.399 LIB libspdk_conf.a 00:06:55.399 SO libspdk_conf.so.6.0 00:06:55.399 LIB libspdk_json.a 00:06:55.399 SO libspdk_json.so.6.0 00:06:55.399 SYMLINK libspdk_conf.so 00:06:55.399 LIB libspdk_rdma_utils.a 00:06:55.399 SYMLINK libspdk_json.so 00:06:55.399 SO libspdk_rdma_utils.so.1.0 00:06:55.399 SYMLINK libspdk_rdma_utils.so 00:06:55.399 CC lib/jsonrpc/jsonrpc_server.o 00:06:55.399 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:55.399 CC lib/jsonrpc/jsonrpc_client.o 00:06:55.399 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:55.399 LIB libspdk_vmd.a 00:06:55.399 CC lib/rdma_provider/common.o 00:06:55.399 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:55.399 SO libspdk_vmd.so.6.0 
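The LIB/SO/SYMLINK triplets above are the SPDK build producing, for each component, a static archive and a versioned shared object, then creating the unversioned symlink. A quick way to inspect that layout by hand might be the following (paths illustrative; the .so.6.0 version is taken from the log lines for libspdk_ut_mock):

    # Illustrative only: each component built above should end up as a static
    # archive plus a versioned .so with an unversioned symlink, which is what
    # the LIB/SO/SYMLINK lines record.
    ls -l build/lib/libspdk_ut_mock.*        # .a, .so -> .so.6.0, .so.6.0
    readelf -d build/lib/libspdk_ut_mock.so.6.0 | grep SONAME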
00:06:55.399 SYMLINK libspdk_vmd.so 00:06:55.399 LIB libspdk_idxd.a 00:06:55.399 SO libspdk_idxd.so.12.1 00:06:55.399 LIB libspdk_jsonrpc.a 00:06:55.399 LIB libspdk_rdma_provider.a 00:06:55.399 SYMLINK libspdk_idxd.so 00:06:55.399 SO libspdk_rdma_provider.so.7.0 00:06:55.399 SO libspdk_jsonrpc.so.6.0 00:06:55.399 SYMLINK libspdk_rdma_provider.so 00:06:55.399 SYMLINK libspdk_jsonrpc.so 00:06:55.399 CC lib/rpc/rpc.o 00:06:55.399 LIB libspdk_rpc.a 00:06:55.399 SO libspdk_rpc.so.6.0 00:06:55.399 SYMLINK libspdk_rpc.so 00:06:55.399 CC lib/keyring/keyring.o 00:06:55.399 CC lib/keyring/keyring_rpc.o 00:06:55.399 CC lib/trace/trace.o 00:06:55.399 CC lib/trace/trace_rpc.o 00:06:55.399 CC lib/trace/trace_flags.o 00:06:55.399 CC lib/notify/notify.o 00:06:55.399 CC lib/notify/notify_rpc.o 00:06:55.399 LIB libspdk_notify.a 00:06:55.399 SO libspdk_notify.so.6.0 00:06:55.399 LIB libspdk_keyring.a 00:06:55.399 SO libspdk_keyring.so.2.0 00:06:55.399 SYMLINK libspdk_notify.so 00:06:55.399 LIB libspdk_trace.a 00:06:55.399 SYMLINK libspdk_keyring.so 00:06:55.399 SO libspdk_trace.so.11.0 00:06:55.399 SYMLINK libspdk_trace.so 00:06:55.659 CC lib/sock/sock.o 00:06:55.659 CC lib/sock/sock_rpc.o 00:06:55.659 CC lib/thread/iobuf.o 00:06:55.659 CC lib/thread/thread.o 00:06:56.625 LIB libspdk_sock.a 00:06:56.625 SO libspdk_sock.so.10.0 00:06:56.625 SYMLINK libspdk_sock.so 00:06:56.884 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:56.884 CC lib/nvme/nvme_fabric.o 00:06:56.884 CC lib/nvme/nvme_ctrlr.o 00:06:56.884 CC lib/nvme/nvme_ns.o 00:06:56.884 CC lib/nvme/nvme_ns_cmd.o 00:06:56.884 CC lib/nvme/nvme_pcie_common.o 00:06:56.884 CC lib/nvme/nvme_pcie.o 00:06:56.884 CC lib/nvme/nvme_qpair.o 00:06:56.884 CC lib/nvme/nvme.o 00:06:56.884 CC lib/nvme/nvme_quirks.o 00:06:56.884 CC lib/nvme/nvme_transport.o 00:06:56.884 CC lib/nvme/nvme_discovery.o 00:06:56.884 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:56.884 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:56.884 CC lib/nvme/nvme_tcp.o 00:06:56.884 CC lib/nvme/nvme_opal.o 00:06:56.884 CC lib/nvme/nvme_io_msg.o 00:06:56.884 CC lib/nvme/nvme_poll_group.o 00:06:56.884 CC lib/nvme/nvme_zns.o 00:06:56.884 CC lib/nvme/nvme_stubs.o 00:06:56.884 CC lib/nvme/nvme_auth.o 00:06:56.884 CC lib/nvme/nvme_cuse.o 00:06:56.884 CC lib/nvme/nvme_rdma.o 00:06:56.884 CC lib/nvme/nvme_vfio_user.o 00:06:56.884 LIB libspdk_env_dpdk.a 00:06:57.142 SO libspdk_env_dpdk.so.15.1 00:06:57.142 SYMLINK libspdk_env_dpdk.so 00:06:58.516 LIB libspdk_thread.a 00:06:58.516 SO libspdk_thread.so.11.0 00:06:58.516 SYMLINK libspdk_thread.so 00:06:58.516 CC lib/init/json_config.o 00:06:58.516 CC lib/virtio/virtio.o 00:06:58.516 CC lib/init/subsystem.o 00:06:58.516 CC lib/init/subsystem_rpc.o 00:06:58.516 CC lib/virtio/virtio_vhost_user.o 00:06:58.516 CC lib/init/rpc.o 00:06:58.516 CC lib/virtio/virtio_vfio_user.o 00:06:58.516 CC lib/fsdev/fsdev.o 00:06:58.516 CC lib/fsdev/fsdev_rpc.o 00:06:58.516 CC lib/fsdev/fsdev_io.o 00:06:58.516 CC lib/virtio/virtio_pci.o 00:06:58.516 CC lib/accel/accel.o 00:06:58.516 CC lib/accel/accel_rpc.o 00:06:58.516 CC lib/accel/accel_sw.o 00:06:58.516 CC lib/blob/blobstore.o 00:06:58.516 CC lib/blob/request.o 00:06:58.516 CC lib/vfu_tgt/tgt_endpoint.o 00:06:58.516 CC lib/blob/zeroes.o 00:06:58.516 CC lib/blob/blob_bs_dev.o 00:06:58.516 CC lib/vfu_tgt/tgt_rpc.o 00:06:58.775 LIB libspdk_init.a 00:06:58.775 SO libspdk_init.so.6.0 00:06:59.034 LIB libspdk_vfu_tgt.a 00:06:59.034 LIB libspdk_virtio.a 00:06:59.034 SYMLINK libspdk_init.so 00:06:59.034 SO libspdk_vfu_tgt.so.3.0 00:06:59.034 SO libspdk_virtio.so.7.0 
00:06:59.034 SYMLINK libspdk_vfu_tgt.so 00:06:59.034 SYMLINK libspdk_virtio.so 00:06:59.293 CC lib/event/reactor.o 00:06:59.293 CC lib/event/app.o 00:06:59.293 CC lib/event/app_rpc.o 00:06:59.293 CC lib/event/log_rpc.o 00:06:59.293 CC lib/event/scheduler_static.o 00:06:59.554 LIB libspdk_fsdev.a 00:06:59.813 SO libspdk_fsdev.so.2.0 00:06:59.813 SYMLINK libspdk_fsdev.so 00:06:59.813 LIB libspdk_accel.a 00:06:59.813 SO libspdk_accel.so.16.0 00:07:00.071 SYMLINK libspdk_accel.so 00:07:00.071 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:00.071 LIB libspdk_event.a 00:07:00.071 CC lib/bdev/bdev.o 00:07:00.071 CC lib/bdev/bdev_zone.o 00:07:00.071 CC lib/bdev/bdev_rpc.o 00:07:00.071 SO libspdk_event.so.14.0 00:07:00.071 CC lib/bdev/part.o 00:07:00.071 CC lib/bdev/scsi_nvme.o 00:07:00.330 SYMLINK libspdk_event.so 00:07:00.895 LIB libspdk_nvme.a 00:07:00.895 LIB libspdk_fuse_dispatcher.a 00:07:00.895 SO libspdk_fuse_dispatcher.so.1.0 00:07:00.895 SO libspdk_nvme.so.15.0 00:07:00.895 SYMLINK libspdk_fuse_dispatcher.so 00:07:01.167 SYMLINK libspdk_nvme.so 00:07:02.542 LIB libspdk_blob.a 00:07:02.542 SO libspdk_blob.so.12.0 00:07:02.542 SYMLINK libspdk_blob.so 00:07:02.800 CC lib/lvol/lvol.o 00:07:02.800 CC lib/blobfs/blobfs.o 00:07:02.800 CC lib/blobfs/tree.o 00:07:04.177 LIB libspdk_blobfs.a 00:07:04.177 SO libspdk_blobfs.so.11.0 00:07:04.177 SYMLINK libspdk_blobfs.so 00:07:04.177 LIB libspdk_lvol.a 00:07:04.177 SO libspdk_lvol.so.11.0 00:07:04.177 SYMLINK libspdk_lvol.so 00:07:06.711 LIB libspdk_bdev.a 00:07:06.711 SO libspdk_bdev.so.17.0 00:07:06.970 SYMLINK libspdk_bdev.so 00:07:07.239 CC lib/nbd/nbd.o 00:07:07.239 CC lib/nbd/nbd_rpc.o 00:07:07.239 CC lib/scsi/dev.o 00:07:07.239 CC lib/scsi/lun.o 00:07:07.239 CC lib/scsi/port.o 00:07:07.239 CC lib/scsi/scsi.o 00:07:07.239 CC lib/scsi/scsi_bdev.o 00:07:07.239 CC lib/scsi/scsi_pr.o 00:07:07.239 CC lib/scsi/scsi_rpc.o 00:07:07.239 CC lib/ublk/ublk.o 00:07:07.239 CC lib/ublk/ublk_rpc.o 00:07:07.239 CC lib/scsi/task.o 00:07:07.239 CC lib/ftl/ftl_core.o 00:07:07.239 CC lib/ftl/ftl_init.o 00:07:07.239 CC lib/ftl/ftl_layout.o 00:07:07.239 CC lib/ftl/ftl_debug.o 00:07:07.239 CC lib/ftl/ftl_io.o 00:07:07.239 CC lib/ftl/ftl_sb.o 00:07:07.239 CC lib/ftl/ftl_l2p.o 00:07:07.239 CC lib/ftl/ftl_l2p_flat.o 00:07:07.239 CC lib/ftl/ftl_nv_cache.o 00:07:07.239 CC lib/ftl/ftl_band.o 00:07:07.239 CC lib/ftl/ftl_band_ops.o 00:07:07.239 CC lib/ftl/ftl_writer.o 00:07:07.239 CC lib/ftl/ftl_rq.o 00:07:07.239 CC lib/ftl/ftl_reloc.o 00:07:07.239 CC lib/ftl/ftl_l2p_cache.o 00:07:07.239 CC lib/ftl/ftl_p2l_log.o 00:07:07.239 CC lib/ftl/ftl_p2l.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:07.239 CC lib/nvmf/ctrlr.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:07.239 CC lib/nvmf/ctrlr_discovery.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:07.239 CC lib/nvmf/ctrlr_bdev.o 00:07:07.239 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:07.239 CC lib/nvmf/subsystem.o 00:07:07.239 CC lib/ftl/utils/ftl_conf.o 00:07:07.239 CC lib/nvmf/nvmf.o 00:07:07.497 CC lib/nvmf/nvmf_rpc.o 00:07:07.497 CC lib/ftl/utils/ftl_md.o 00:07:07.497 CC lib/ftl/utils/ftl_mempool.o 
00:07:07.497 CC lib/nvmf/transport.o 00:07:07.497 CC lib/ftl/utils/ftl_bitmap.o 00:07:07.760 CC lib/nvmf/tcp.o 00:07:07.760 CC lib/ftl/utils/ftl_property.o 00:07:07.760 CC lib/nvmf/stubs.o 00:07:07.760 CC lib/nvmf/mdns_server.o 00:07:07.760 CC lib/nvmf/vfio_user.o 00:07:07.760 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:07.760 CC lib/nvmf/rdma.o 00:07:07.760 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:07.760 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:07.760 CC lib/nvmf/auth.o 00:07:07.760 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:07.760 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:07.760 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:07.760 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:07.760 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:07.760 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:07.760 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:07.760 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:07.760 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:07.760 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:07.760 CC lib/ftl/base/ftl_base_dev.o 00:07:07.760 CC lib/ftl/base/ftl_base_bdev.o 00:07:07.760 CC lib/ftl/ftl_trace.o 00:07:08.018 LIB libspdk_nbd.a 00:07:08.018 SO libspdk_nbd.so.7.0 00:07:08.018 LIB libspdk_scsi.a 00:07:08.275 SYMLINK libspdk_nbd.so 00:07:08.275 SO libspdk_scsi.so.9.0 00:07:08.275 SYMLINK libspdk_scsi.so 00:07:08.275 LIB libspdk_ublk.a 00:07:08.275 SO libspdk_ublk.so.3.0 00:07:08.275 SYMLINK libspdk_ublk.so 00:07:08.534 CC lib/vhost/vhost.o 00:07:08.534 CC lib/iscsi/conn.o 00:07:08.534 CC lib/vhost/vhost_rpc.o 00:07:08.534 CC lib/iscsi/init_grp.o 00:07:08.534 CC lib/vhost/vhost_scsi.o 00:07:08.534 CC lib/iscsi/iscsi.o 00:07:08.534 CC lib/vhost/vhost_blk.o 00:07:08.534 CC lib/iscsi/param.o 00:07:08.534 CC lib/iscsi/portal_grp.o 00:07:08.534 CC lib/vhost/rte_vhost_user.o 00:07:08.534 CC lib/iscsi/tgt_node.o 00:07:08.534 CC lib/iscsi/iscsi_subsystem.o 00:07:08.534 CC lib/iscsi/iscsi_rpc.o 00:07:08.534 CC lib/iscsi/task.o 00:07:08.534 LIB libspdk_ftl.a 00:07:08.793 SO libspdk_ftl.so.9.0 00:07:09.053 SYMLINK libspdk_ftl.so 00:07:09.994 LIB libspdk_iscsi.a 00:07:09.994 SO libspdk_iscsi.so.8.0 00:07:10.254 SYMLINK libspdk_iscsi.so 00:07:10.254 LIB libspdk_nvmf.a 00:07:10.513 SO libspdk_nvmf.so.20.0 00:07:10.773 LIB libspdk_vhost.a 00:07:10.773 SYMLINK libspdk_nvmf.so 00:07:11.033 SO libspdk_vhost.so.8.0 00:07:11.033 SYMLINK libspdk_vhost.so 00:07:11.599 CC module/env_dpdk/env_dpdk_rpc.o 00:07:11.599 CC module/vfu_device/vfu_virtio.o 00:07:11.599 CC module/vfu_device/vfu_virtio_blk.o 00:07:11.599 CC module/vfu_device/vfu_virtio_scsi.o 00:07:11.599 CC module/vfu_device/vfu_virtio_rpc.o 00:07:11.599 CC module/vfu_device/vfu_virtio_fs.o 00:07:11.599 CC module/blob/bdev/blob_bdev.o 00:07:11.599 CC module/accel/iaa/accel_iaa.o 00:07:11.599 CC module/accel/iaa/accel_iaa_rpc.o 00:07:11.599 CC module/accel/error/accel_error.o 00:07:11.599 CC module/accel/error/accel_error_rpc.o 00:07:11.599 CC module/accel/dsa/accel_dsa.o 00:07:11.599 CC module/accel/dsa/accel_dsa_rpc.o 00:07:11.599 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:11.599 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:11.599 CC module/sock/posix/posix.o 00:07:11.599 CC module/fsdev/aio/linux_aio_mgr.o 00:07:11.599 CC module/fsdev/aio/fsdev_aio.o 00:07:11.599 CC module/keyring/linux/keyring.o 00:07:11.599 CC module/keyring/linux/keyring_rpc.o 00:07:11.600 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:11.600 CC module/accel/ioat/accel_ioat.o 00:07:11.600 CC module/accel/ioat/accel_ioat_rpc.o 00:07:11.600 CC module/scheduler/gscheduler/gscheduler.o 
00:07:11.600 CC module/keyring/file/keyring.o 00:07:11.600 CC module/keyring/file/keyring_rpc.o 00:07:11.600 LIB libspdk_env_dpdk_rpc.a 00:07:11.600 SO libspdk_env_dpdk_rpc.so.6.0 00:07:11.600 SYMLINK libspdk_env_dpdk_rpc.so 00:07:11.860 LIB libspdk_keyring_linux.a 00:07:11.860 LIB libspdk_scheduler_gscheduler.a 00:07:11.860 LIB libspdk_scheduler_dpdk_governor.a 00:07:11.860 SO libspdk_scheduler_gscheduler.so.4.0 00:07:11.860 SO libspdk_keyring_linux.so.1.0 00:07:11.860 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:11.860 LIB libspdk_scheduler_dynamic.a 00:07:11.860 LIB libspdk_accel_ioat.a 00:07:11.860 LIB libspdk_accel_iaa.a 00:07:11.860 SO libspdk_scheduler_dynamic.so.4.0 00:07:11.860 LIB libspdk_keyring_file.a 00:07:11.860 SYMLINK libspdk_scheduler_gscheduler.so 00:07:11.860 SO libspdk_accel_ioat.so.6.0 00:07:11.860 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:11.860 SYMLINK libspdk_keyring_linux.so 00:07:11.860 SO libspdk_keyring_file.so.2.0 00:07:11.860 SO libspdk_accel_iaa.so.3.0 00:07:11.860 SYMLINK libspdk_scheduler_dynamic.so 00:07:11.860 SYMLINK libspdk_accel_ioat.so 00:07:11.860 LIB libspdk_blob_bdev.a 00:07:11.860 LIB libspdk_accel_dsa.a 00:07:11.860 SYMLINK libspdk_accel_iaa.so 00:07:11.860 LIB libspdk_accel_error.a 00:07:11.860 SO libspdk_blob_bdev.so.12.0 00:07:11.860 SYMLINK libspdk_keyring_file.so 00:07:11.860 SO libspdk_accel_dsa.so.5.0 00:07:11.860 SO libspdk_accel_error.so.2.0 00:07:11.860 SYMLINK libspdk_accel_dsa.so 00:07:11.860 SYMLINK libspdk_accel_error.so 00:07:12.119 SYMLINK libspdk_blob_bdev.so 00:07:12.119 LIB libspdk_vfu_device.a 00:07:12.119 SO libspdk_vfu_device.so.3.0 00:07:12.382 SYMLINK libspdk_vfu_device.so 00:07:12.382 CC module/blobfs/bdev/blobfs_bdev.o 00:07:12.382 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:12.382 CC module/bdev/gpt/gpt.o 00:07:12.382 CC module/bdev/gpt/vbdev_gpt.o 00:07:12.382 CC module/bdev/malloc/bdev_malloc.o 00:07:12.382 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:12.382 CC module/bdev/passthru/vbdev_passthru.o 00:07:12.382 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:12.382 CC module/bdev/null/bdev_null.o 00:07:12.382 CC module/bdev/error/vbdev_error.o 00:07:12.382 CC module/bdev/split/vbdev_split.o 00:07:12.382 CC module/bdev/error/vbdev_error_rpc.o 00:07:12.382 CC module/bdev/null/bdev_null_rpc.o 00:07:12.382 CC module/bdev/split/vbdev_split_rpc.o 00:07:12.382 CC module/bdev/lvol/vbdev_lvol.o 00:07:12.382 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:12.382 CC module/bdev/ftl/bdev_ftl.o 00:07:12.382 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:12.382 CC module/bdev/delay/vbdev_delay.o 00:07:12.382 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:12.382 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:12.382 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:12.382 CC module/bdev/aio/bdev_aio.o 00:07:12.382 CC module/bdev/aio/bdev_aio_rpc.o 00:07:12.382 CC module/bdev/iscsi/bdev_iscsi.o 00:07:12.382 CC module/bdev/nvme/bdev_nvme.o 00:07:12.382 CC module/bdev/nvme/nvme_rpc.o 00:07:12.382 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:12.382 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:12.382 CC module/bdev/nvme/bdev_mdns_client.o 00:07:12.382 CC module/bdev/nvme/vbdev_opal.o 00:07:12.382 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:12.382 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:12.382 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:12.382 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:12.382 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:12.382 CC module/bdev/raid/bdev_raid.o 00:07:12.382 CC 
module/bdev/raid/bdev_raid_rpc.o 00:07:12.382 CC module/bdev/raid/bdev_raid_sb.o 00:07:12.382 CC module/bdev/raid/raid0.o 00:07:12.382 CC module/bdev/raid/raid1.o 00:07:12.382 CC module/bdev/raid/concat.o 00:07:12.640 LIB libspdk_sock_posix.a 00:07:12.640 SO libspdk_sock_posix.so.6.0 00:07:12.640 LIB libspdk_blobfs_bdev.a 00:07:12.640 SYMLINK libspdk_sock_posix.so 00:07:12.640 LIB libspdk_fsdev_aio.a 00:07:12.640 SO libspdk_blobfs_bdev.so.6.0 00:07:12.900 LIB libspdk_bdev_split.a 00:07:12.900 SO libspdk_fsdev_aio.so.1.0 00:07:12.900 SYMLINK libspdk_blobfs_bdev.so 00:07:12.900 LIB libspdk_bdev_gpt.a 00:07:12.900 SO libspdk_bdev_split.so.6.0 00:07:12.900 LIB libspdk_bdev_null.a 00:07:12.900 SO libspdk_bdev_gpt.so.6.0 00:07:12.900 LIB libspdk_bdev_error.a 00:07:12.900 SO libspdk_bdev_null.so.6.0 00:07:12.900 LIB libspdk_bdev_ftl.a 00:07:12.900 SYMLINK libspdk_fsdev_aio.so 00:07:12.900 SO libspdk_bdev_error.so.6.0 00:07:12.900 SYMLINK libspdk_bdev_split.so 00:07:12.900 SO libspdk_bdev_ftl.so.6.0 00:07:12.900 LIB libspdk_bdev_aio.a 00:07:12.900 LIB libspdk_bdev_passthru.a 00:07:12.900 SYMLINK libspdk_bdev_gpt.so 00:07:12.900 LIB libspdk_bdev_zone_block.a 00:07:12.900 SO libspdk_bdev_passthru.so.6.0 00:07:12.900 SO libspdk_bdev_aio.so.6.0 00:07:12.900 SYMLINK libspdk_bdev_null.so 00:07:12.900 SYMLINK libspdk_bdev_error.so 00:07:12.900 SO libspdk_bdev_zone_block.so.6.0 00:07:12.900 SYMLINK libspdk_bdev_ftl.so 00:07:12.900 SYMLINK libspdk_bdev_passthru.so 00:07:12.900 LIB libspdk_bdev_delay.a 00:07:12.900 LIB libspdk_bdev_malloc.a 00:07:12.900 SYMLINK libspdk_bdev_aio.so 00:07:12.900 LIB libspdk_bdev_iscsi.a 00:07:12.900 SO libspdk_bdev_delay.so.6.0 00:07:12.900 SO libspdk_bdev_malloc.so.6.0 00:07:13.160 SYMLINK libspdk_bdev_zone_block.so 00:07:13.160 SO libspdk_bdev_iscsi.so.6.0 00:07:13.160 SYMLINK libspdk_bdev_iscsi.so 00:07:13.160 SYMLINK libspdk_bdev_delay.so 00:07:13.160 SYMLINK libspdk_bdev_malloc.so 00:07:13.160 LIB libspdk_bdev_virtio.a 00:07:13.160 SO libspdk_bdev_virtio.so.6.0 00:07:13.420 LIB libspdk_bdev_lvol.a 00:07:13.420 SO libspdk_bdev_lvol.so.6.0 00:07:13.420 SYMLINK libspdk_bdev_virtio.so 00:07:13.420 SYMLINK libspdk_bdev_lvol.so 00:07:13.679 LIB libspdk_bdev_raid.a 00:07:13.998 SO libspdk_bdev_raid.so.6.0 00:07:13.998 SYMLINK libspdk_bdev_raid.so 00:07:15.907 LIB libspdk_bdev_nvme.a 00:07:16.167 SO libspdk_bdev_nvme.so.7.1 00:07:16.426 SYMLINK libspdk_bdev_nvme.so 00:07:16.996 CC module/event/subsystems/scheduler/scheduler.o 00:07:16.996 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:16.996 CC module/event/subsystems/keyring/keyring.o 00:07:16.996 CC module/event/subsystems/vmd/vmd.o 00:07:16.996 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:16.996 CC module/event/subsystems/fsdev/fsdev.o 00:07:16.996 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:16.996 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:16.996 CC module/event/subsystems/iobuf/iobuf.o 00:07:16.996 CC module/event/subsystems/sock/sock.o 00:07:16.996 LIB libspdk_event_keyring.a 00:07:16.996 LIB libspdk_event_vfu_tgt.a 00:07:16.996 LIB libspdk_event_vmd.a 00:07:16.996 LIB libspdk_event_vhost_blk.a 00:07:16.996 LIB libspdk_event_scheduler.a 00:07:16.996 LIB libspdk_event_fsdev.a 00:07:16.996 LIB libspdk_event_sock.a 00:07:16.996 SO libspdk_event_vfu_tgt.so.3.0 00:07:17.257 SO libspdk_event_keyring.so.1.0 00:07:17.257 SO libspdk_event_vhost_blk.so.3.0 00:07:17.257 SO libspdk_event_vmd.so.6.0 00:07:17.257 SO libspdk_event_scheduler.so.4.0 00:07:17.257 LIB libspdk_event_iobuf.a 00:07:17.257 SO 
libspdk_event_sock.so.5.0 00:07:17.257 SO libspdk_event_fsdev.so.1.0 00:07:17.257 SO libspdk_event_iobuf.so.3.0 00:07:17.257 SYMLINK libspdk_event_keyring.so 00:07:17.257 SYMLINK libspdk_event_vhost_blk.so 00:07:17.257 SYMLINK libspdk_event_scheduler.so 00:07:17.257 SYMLINK libspdk_event_vmd.so 00:07:17.257 SYMLINK libspdk_event_vfu_tgt.so 00:07:17.257 SYMLINK libspdk_event_fsdev.so 00:07:17.257 SYMLINK libspdk_event_sock.so 00:07:17.257 SYMLINK libspdk_event_iobuf.so 00:07:17.517 CC module/event/subsystems/accel/accel.o 00:07:17.777 LIB libspdk_event_accel.a 00:07:17.777 SO libspdk_event_accel.so.6.0 00:07:18.099 SYMLINK libspdk_event_accel.so 00:07:18.359 CC module/event/subsystems/bdev/bdev.o 00:07:18.619 LIB libspdk_event_bdev.a 00:07:18.619 SO libspdk_event_bdev.so.6.0 00:07:18.878 SYMLINK libspdk_event_bdev.so 00:07:19.139 CC module/event/subsystems/nbd/nbd.o 00:07:19.139 CC module/event/subsystems/scsi/scsi.o 00:07:19.139 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:19.139 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:19.139 CC module/event/subsystems/ublk/ublk.o 00:07:19.400 LIB libspdk_event_nbd.a 00:07:19.400 SO libspdk_event_nbd.so.6.0 00:07:19.400 LIB libspdk_event_scsi.a 00:07:19.400 LIB libspdk_event_ublk.a 00:07:19.400 SO libspdk_event_scsi.so.6.0 00:07:19.400 SO libspdk_event_ublk.so.3.0 00:07:19.400 SYMLINK libspdk_event_nbd.so 00:07:19.659 SYMLINK libspdk_event_scsi.so 00:07:19.659 SYMLINK libspdk_event_ublk.so 00:07:19.659 LIB libspdk_event_nvmf.a 00:07:19.659 SO libspdk_event_nvmf.so.6.0 00:07:19.919 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:19.919 SYMLINK libspdk_event_nvmf.so 00:07:19.919 CC module/event/subsystems/iscsi/iscsi.o 00:07:19.919 LIB libspdk_event_vhost_scsi.a 00:07:19.919 SO libspdk_event_vhost_scsi.so.3.0 00:07:20.178 LIB libspdk_event_iscsi.a 00:07:20.178 SO libspdk_event_iscsi.so.6.0 00:07:20.178 SYMLINK libspdk_event_vhost_scsi.so 00:07:20.178 SYMLINK libspdk_event_iscsi.so 00:07:20.436 SO libspdk.so.6.0 00:07:20.436 SYMLINK libspdk.so 00:07:20.699 CXX app/trace/trace.o 00:07:20.699 CC app/trace_record/trace_record.o 00:07:20.699 CC app/spdk_nvme_identify/identify.o 00:07:20.699 CC app/spdk_nvme_perf/perf.o 00:07:20.699 CC app/spdk_lspci/spdk_lspci.o 00:07:20.699 CC app/spdk_top/spdk_top.o 00:07:20.699 CC app/spdk_nvme_discover/discovery_aer.o 00:07:20.699 TEST_HEADER include/spdk/accel.h 00:07:20.699 TEST_HEADER include/spdk/accel_module.h 00:07:20.699 TEST_HEADER include/spdk/assert.h 00:07:20.699 TEST_HEADER include/spdk/barrier.h 00:07:20.699 TEST_HEADER include/spdk/base64.h 00:07:20.699 TEST_HEADER include/spdk/bdev.h 00:07:20.699 CC test/rpc_client/rpc_client_test.o 00:07:20.699 TEST_HEADER include/spdk/bdev_module.h 00:07:20.699 TEST_HEADER include/spdk/bdev_zone.h 00:07:20.699 TEST_HEADER include/spdk/bit_array.h 00:07:20.699 TEST_HEADER include/spdk/bit_pool.h 00:07:20.699 TEST_HEADER include/spdk/blob_bdev.h 00:07:20.699 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:20.699 TEST_HEADER include/spdk/blobfs.h 00:07:20.699 TEST_HEADER include/spdk/blob.h 00:07:20.699 TEST_HEADER include/spdk/conf.h 00:07:20.699 TEST_HEADER include/spdk/config.h 00:07:20.699 TEST_HEADER include/spdk/cpuset.h 00:07:20.699 TEST_HEADER include/spdk/crc16.h 00:07:20.699 TEST_HEADER include/spdk/crc64.h 00:07:20.699 TEST_HEADER include/spdk/crc32.h 00:07:20.699 TEST_HEADER include/spdk/dif.h 00:07:20.699 TEST_HEADER include/spdk/dma.h 00:07:20.699 TEST_HEADER include/spdk/endian.h 00:07:20.699 TEST_HEADER include/spdk/env_dpdk.h 00:07:20.699 
TEST_HEADER include/spdk/env.h 00:07:20.699 TEST_HEADER include/spdk/event.h 00:07:20.699 TEST_HEADER include/spdk/fd_group.h 00:07:20.699 TEST_HEADER include/spdk/fd.h 00:07:20.699 TEST_HEADER include/spdk/file.h 00:07:20.699 TEST_HEADER include/spdk/fsdev_module.h 00:07:20.699 TEST_HEADER include/spdk/fsdev.h 00:07:20.699 TEST_HEADER include/spdk/ftl.h 00:07:20.699 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:20.699 TEST_HEADER include/spdk/gpt_spec.h 00:07:20.699 TEST_HEADER include/spdk/histogram_data.h 00:07:20.699 TEST_HEADER include/spdk/hexlify.h 00:07:20.699 TEST_HEADER include/spdk/idxd.h 00:07:20.699 TEST_HEADER include/spdk/idxd_spec.h 00:07:20.699 TEST_HEADER include/spdk/init.h 00:07:20.699 TEST_HEADER include/spdk/ioat.h 00:07:20.699 TEST_HEADER include/spdk/ioat_spec.h 00:07:20.699 TEST_HEADER include/spdk/json.h 00:07:20.699 TEST_HEADER include/spdk/iscsi_spec.h 00:07:20.699 TEST_HEADER include/spdk/jsonrpc.h 00:07:20.699 TEST_HEADER include/spdk/keyring_module.h 00:07:20.699 TEST_HEADER include/spdk/keyring.h 00:07:20.699 TEST_HEADER include/spdk/likely.h 00:07:20.699 TEST_HEADER include/spdk/log.h 00:07:20.699 TEST_HEADER include/spdk/lvol.h 00:07:20.699 TEST_HEADER include/spdk/md5.h 00:07:20.699 TEST_HEADER include/spdk/memory.h 00:07:20.699 TEST_HEADER include/spdk/mmio.h 00:07:20.699 TEST_HEADER include/spdk/nbd.h 00:07:20.699 TEST_HEADER include/spdk/net.h 00:07:20.699 TEST_HEADER include/spdk/notify.h 00:07:20.699 TEST_HEADER include/spdk/nvme.h 00:07:20.699 TEST_HEADER include/spdk/nvme_intel.h 00:07:20.699 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:20.699 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:20.699 TEST_HEADER include/spdk/nvme_spec.h 00:07:20.699 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:20.699 TEST_HEADER include/spdk/nvme_zns.h 00:07:20.699 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:20.699 TEST_HEADER include/spdk/nvmf.h 00:07:20.699 TEST_HEADER include/spdk/nvmf_spec.h 00:07:20.699 TEST_HEADER include/spdk/nvmf_transport.h 00:07:20.699 TEST_HEADER include/spdk/opal.h 00:07:20.699 TEST_HEADER include/spdk/opal_spec.h 00:07:20.699 TEST_HEADER include/spdk/pci_ids.h 00:07:20.699 TEST_HEADER include/spdk/queue.h 00:07:20.699 TEST_HEADER include/spdk/pipe.h 00:07:20.699 TEST_HEADER include/spdk/reduce.h 00:07:20.699 TEST_HEADER include/spdk/rpc.h 00:07:20.699 TEST_HEADER include/spdk/scheduler.h 00:07:20.699 TEST_HEADER include/spdk/scsi.h 00:07:20.699 TEST_HEADER include/spdk/scsi_spec.h 00:07:20.699 TEST_HEADER include/spdk/sock.h 00:07:20.699 TEST_HEADER include/spdk/stdinc.h 00:07:20.699 TEST_HEADER include/spdk/string.h 00:07:20.699 TEST_HEADER include/spdk/trace.h 00:07:20.699 TEST_HEADER include/spdk/thread.h 00:07:20.699 TEST_HEADER include/spdk/trace_parser.h 00:07:20.699 TEST_HEADER include/spdk/tree.h 00:07:20.699 TEST_HEADER include/spdk/ublk.h 00:07:20.699 TEST_HEADER include/spdk/util.h 00:07:20.699 TEST_HEADER include/spdk/uuid.h 00:07:20.699 TEST_HEADER include/spdk/version.h 00:07:20.699 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:20.699 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:20.699 TEST_HEADER include/spdk/vhost.h 00:07:20.699 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:20.699 TEST_HEADER include/spdk/vmd.h 00:07:20.699 TEST_HEADER include/spdk/xor.h 00:07:20.699 TEST_HEADER include/spdk/zipf.h 00:07:20.699 CXX test/cpp_headers/accel.o 00:07:20.699 CXX test/cpp_headers/accel_module.o 00:07:20.699 CXX test/cpp_headers/barrier.o 00:07:20.699 CXX test/cpp_headers/assert.o 00:07:20.699 CXX test/cpp_headers/base64.o 
00:07:20.699 CXX test/cpp_headers/bdev.o 00:07:20.699 CXX test/cpp_headers/bdev_module.o 00:07:20.699 CC app/iscsi_tgt/iscsi_tgt.o 00:07:20.699 CXX test/cpp_headers/bdev_zone.o 00:07:20.699 CC app/nvmf_tgt/nvmf_main.o 00:07:20.699 CXX test/cpp_headers/bit_array.o 00:07:20.699 CXX test/cpp_headers/bit_pool.o 00:07:20.699 CXX test/cpp_headers/blob_bdev.o 00:07:20.699 CXX test/cpp_headers/blobfs_bdev.o 00:07:20.699 CXX test/cpp_headers/blobfs.o 00:07:20.699 CXX test/cpp_headers/blob.o 00:07:20.699 CXX test/cpp_headers/conf.o 00:07:20.699 CXX test/cpp_headers/config.o 00:07:20.699 CXX test/cpp_headers/cpuset.o 00:07:20.699 CXX test/cpp_headers/crc16.o 00:07:20.699 CC app/spdk_dd/spdk_dd.o 00:07:20.699 CXX test/cpp_headers/crc32.o 00:07:20.699 CC app/spdk_tgt/spdk_tgt.o 00:07:20.699 CC examples/ioat/perf/perf.o 00:07:20.699 CC examples/ioat/verify/verify.o 00:07:20.699 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:20.699 CC test/env/memory/memory_ut.o 00:07:20.699 CC app/fio/nvme/fio_plugin.o 00:07:20.699 CC examples/util/zipf/zipf.o 00:07:20.699 CC test/app/histogram_perf/histogram_perf.o 00:07:20.699 CC test/app/stub/stub.o 00:07:20.699 CC test/app/jsoncat/jsoncat.o 00:07:20.699 CC test/env/pci/pci_ut.o 00:07:20.699 CC test/thread/poller_perf/poller_perf.o 00:07:20.699 CC test/env/vtophys/vtophys.o 00:07:20.962 CC test/dma/test_dma/test_dma.o 00:07:20.962 CC app/fio/bdev/fio_plugin.o 00:07:20.962 CC test/app/bdev_svc/bdev_svc.o 00:07:20.962 LINK spdk_lspci 00:07:20.962 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:20.962 CC test/env/mem_callbacks/mem_callbacks.o 00:07:21.226 LINK rpc_client_test 00:07:21.226 LINK spdk_nvme_discover 00:07:21.226 LINK interrupt_tgt 00:07:21.226 LINK histogram_perf 00:07:21.226 LINK nvmf_tgt 00:07:21.226 LINK jsoncat 00:07:21.226 LINK poller_perf 00:07:21.226 LINK vtophys 00:07:21.226 LINK zipf 00:07:21.226 LINK env_dpdk_post_init 00:07:21.226 CXX test/cpp_headers/crc64.o 00:07:21.226 CXX test/cpp_headers/dif.o 00:07:21.226 CXX test/cpp_headers/dma.o 00:07:21.226 CXX test/cpp_headers/endian.o 00:07:21.226 CXX test/cpp_headers/env_dpdk.o 00:07:21.226 CXX test/cpp_headers/env.o 00:07:21.226 CXX test/cpp_headers/event.o 00:07:21.226 LINK stub 00:07:21.226 CXX test/cpp_headers/fd_group.o 00:07:21.226 CXX test/cpp_headers/fd.o 00:07:21.226 CXX test/cpp_headers/file.o 00:07:21.226 LINK spdk_trace_record 00:07:21.226 LINK iscsi_tgt 00:07:21.226 CXX test/cpp_headers/fsdev.o 00:07:21.226 CXX test/cpp_headers/fsdev_module.o 00:07:21.226 CXX test/cpp_headers/ftl.o 00:07:21.226 LINK ioat_perf 00:07:21.226 CXX test/cpp_headers/fuse_dispatcher.o 00:07:21.226 LINK verify 00:07:21.226 CXX test/cpp_headers/gpt_spec.o 00:07:21.226 LINK spdk_tgt 00:07:21.226 CXX test/cpp_headers/hexlify.o 00:07:21.226 CXX test/cpp_headers/histogram_data.o 00:07:21.226 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:21.226 LINK bdev_svc 00:07:21.487 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:21.487 CXX test/cpp_headers/idxd.o 00:07:21.487 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:21.487 CXX test/cpp_headers/idxd_spec.o 00:07:21.487 CXX test/cpp_headers/init.o 00:07:21.487 CXX test/cpp_headers/ioat.o 00:07:21.487 CXX test/cpp_headers/ioat_spec.o 00:07:21.487 LINK spdk_dd 00:07:21.487 CXX test/cpp_headers/iscsi_spec.o 00:07:21.487 LINK spdk_trace 00:07:21.487 CXX test/cpp_headers/json.o 00:07:21.487 CXX test/cpp_headers/jsonrpc.o 00:07:21.487 CXX test/cpp_headers/keyring.o 00:07:21.487 CXX test/cpp_headers/keyring_module.o 00:07:21.751 CXX test/cpp_headers/likely.o 00:07:21.751 
CXX test/cpp_headers/log.o 00:07:21.751 CXX test/cpp_headers/lvol.o 00:07:21.751 CXX test/cpp_headers/md5.o 00:07:21.751 CXX test/cpp_headers/memory.o 00:07:21.751 CXX test/cpp_headers/mmio.o 00:07:21.751 CXX test/cpp_headers/nbd.o 00:07:21.751 CXX test/cpp_headers/net.o 00:07:21.751 LINK pci_ut 00:07:21.751 CXX test/cpp_headers/notify.o 00:07:21.751 CXX test/cpp_headers/nvme.o 00:07:21.751 CXX test/cpp_headers/nvme_intel.o 00:07:21.751 CXX test/cpp_headers/nvme_ocssd.o 00:07:21.751 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:21.751 CXX test/cpp_headers/nvme_spec.o 00:07:21.751 CXX test/cpp_headers/nvme_zns.o 00:07:21.751 LINK nvme_fuzz 00:07:21.751 CXX test/cpp_headers/nvmf_cmd.o 00:07:21.751 CC test/event/event_perf/event_perf.o 00:07:21.751 CC test/event/reactor/reactor.o 00:07:21.751 CC test/event/reactor_perf/reactor_perf.o 00:07:22.010 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:22.010 CC examples/sock/hello_world/hello_sock.o 00:07:22.010 CXX test/cpp_headers/nvmf.o 00:07:22.010 CC test/event/app_repeat/app_repeat.o 00:07:22.010 CC examples/vmd/lsvmd/lsvmd.o 00:07:22.010 CC examples/idxd/perf/perf.o 00:07:22.010 CXX test/cpp_headers/nvmf_spec.o 00:07:22.010 CXX test/cpp_headers/nvmf_transport.o 00:07:22.010 LINK test_dma 00:07:22.010 LINK spdk_nvme 00:07:22.010 CC examples/vmd/led/led.o 00:07:22.010 CXX test/cpp_headers/opal.o 00:07:22.010 LINK spdk_bdev 00:07:22.010 CC examples/thread/thread/thread_ex.o 00:07:22.010 CC test/event/scheduler/scheduler.o 00:07:22.010 CXX test/cpp_headers/opal_spec.o 00:07:22.010 CXX test/cpp_headers/pci_ids.o 00:07:22.010 CXX test/cpp_headers/pipe.o 00:07:22.010 CXX test/cpp_headers/queue.o 00:07:22.010 CXX test/cpp_headers/reduce.o 00:07:22.010 CXX test/cpp_headers/rpc.o 00:07:22.010 CXX test/cpp_headers/scheduler.o 00:07:22.010 CXX test/cpp_headers/scsi.o 00:07:22.010 CXX test/cpp_headers/scsi_spec.o 00:07:22.010 CXX test/cpp_headers/sock.o 00:07:22.010 CXX test/cpp_headers/stdinc.o 00:07:22.010 CXX test/cpp_headers/string.o 00:07:22.010 CXX test/cpp_headers/thread.o 00:07:22.010 CXX test/cpp_headers/trace.o 00:07:22.010 CXX test/cpp_headers/trace_parser.o 00:07:22.291 CXX test/cpp_headers/tree.o 00:07:22.291 CXX test/cpp_headers/ublk.o 00:07:22.291 CXX test/cpp_headers/util.o 00:07:22.291 LINK reactor 00:07:22.291 CXX test/cpp_headers/uuid.o 00:07:22.291 CXX test/cpp_headers/version.o 00:07:22.291 CC app/vhost/vhost.o 00:07:22.291 LINK reactor_perf 00:07:22.291 LINK event_perf 00:07:22.291 CXX test/cpp_headers/vfio_user_pci.o 00:07:22.291 CXX test/cpp_headers/vfio_user_spec.o 00:07:22.291 CXX test/cpp_headers/vhost.o 00:07:22.291 LINK lsvmd 00:07:22.291 CXX test/cpp_headers/vmd.o 00:07:22.291 LINK mem_callbacks 00:07:22.291 CXX test/cpp_headers/xor.o 00:07:22.291 LINK app_repeat 00:07:22.291 CXX test/cpp_headers/zipf.o 00:07:22.291 LINK led 00:07:22.291 LINK vhost_fuzz 00:07:22.291 LINK spdk_nvme_perf 00:07:22.291 LINK spdk_nvme_identify 00:07:22.291 LINK hello_sock 00:07:22.584 LINK spdk_top 00:07:22.584 LINK scheduler 00:07:22.584 LINK thread 00:07:22.584 LINK vhost 00:07:22.584 CC test/nvme/aer/aer.o 00:07:22.584 CC test/nvme/sgl/sgl.o 00:07:22.584 CC test/nvme/reset/reset.o 00:07:22.584 CC test/nvme/e2edp/nvme_dp.o 00:07:22.584 CC test/nvme/startup/startup.o 00:07:22.584 CC test/nvme/connect_stress/connect_stress.o 00:07:22.584 CC test/nvme/err_injection/err_injection.o 00:07:22.584 CC test/nvme/compliance/nvme_compliance.o 00:07:22.584 CC test/nvme/overhead/overhead.o 00:07:22.584 CC test/nvme/simple_copy/simple_copy.o 00:07:22.584 CC 
test/nvme/fused_ordering/fused_ordering.o 00:07:22.584 CC test/nvme/reserve/reserve.o 00:07:22.584 CC test/nvme/boot_partition/boot_partition.o 00:07:22.584 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:22.584 CC test/nvme/cuse/cuse.o 00:07:22.584 CC test/nvme/fdp/fdp.o 00:07:22.584 LINK idxd_perf 00:07:22.584 CC test/accel/dif/dif.o 00:07:22.584 CC test/blobfs/mkfs/mkfs.o 00:07:22.879 CC test/lvol/esnap/esnap.o 00:07:22.879 LINK reserve 00:07:22.879 LINK err_injection 00:07:22.879 CC examples/nvme/arbitration/arbitration.o 00:07:22.879 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:22.879 LINK connect_stress 00:07:22.879 LINK fused_ordering 00:07:22.879 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:22.879 CC examples/nvme/abort/abort.o 00:07:22.879 CC examples/nvme/hotplug/hotplug.o 00:07:22.879 CC examples/nvme/hello_world/hello_world.o 00:07:22.879 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:22.879 CC examples/nvme/reconnect/reconnect.o 00:07:22.879 LINK boot_partition 00:07:22.879 LINK sgl 00:07:23.138 LINK mkfs 00:07:23.138 LINK startup 00:07:23.138 LINK reset 00:07:23.138 LINK doorbell_aers 00:07:23.138 LINK overhead 00:07:23.138 LINK memory_ut 00:07:23.138 LINK nvme_dp 00:07:23.138 CC examples/accel/perf/accel_perf.o 00:07:23.138 LINK simple_copy 00:07:23.138 LINK fdp 00:07:23.138 LINK aer 00:07:23.138 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:23.138 CC examples/blob/cli/blobcli.o 00:07:23.138 LINK nvme_compliance 00:07:23.138 CC examples/blob/hello_world/hello_blob.o 00:07:23.138 LINK pmr_persistence 00:07:23.138 LINK cmb_copy 00:07:23.396 LINK hotplug 00:07:23.396 LINK reconnect 00:07:23.396 LINK hello_world 00:07:23.396 LINK abort 00:07:23.396 LINK arbitration 00:07:23.396 LINK hello_blob 00:07:23.654 LINK hello_fsdev 00:07:23.654 LINK dif 00:07:23.654 LINK nvme_manage 00:07:23.654 LINK accel_perf 00:07:23.654 LINK blobcli 00:07:24.220 CC test/bdev/bdevio/bdevio.o 00:07:24.220 CC examples/bdev/hello_world/hello_bdev.o 00:07:24.220 CC examples/bdev/bdevperf/bdevperf.o 00:07:24.220 LINK iscsi_fuzz 00:07:24.478 LINK hello_bdev 00:07:24.478 LINK cuse 00:07:24.737 LINK bdevio 00:07:25.304 LINK bdevperf 00:07:25.871 CC examples/nvmf/nvmf/nvmf.o 00:07:26.438 LINK nvmf 00:07:30.630 LINK esnap 00:07:30.889 00:07:30.889 real 1m46.832s 00:07:30.889 user 13m37.902s 00:07:30.889 sys 2m50.842s 00:07:30.889 10:18:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:30.889 10:18:15 make -- common/autotest_common.sh@10 -- $ set +x 00:07:30.889 ************************************ 00:07:30.889 END TEST make 00:07:30.889 ************************************ 00:07:30.889 10:18:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:30.889 10:18:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:30.889 10:18:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:30.889 10:18:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:30.889 10:18:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:30.889 10:18:15 -- pm/common@44 -- $ pid=1876184 00:07:30.889 10:18:15 -- pm/common@50 -- $ kill -TERM 1876184 00:07:30.889 10:18:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:30.889 10:18:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:30.889 10:18:15 -- pm/common@44 -- $ pid=1876186 00:07:30.889 10:18:15 -- pm/common@50 -- $ kill -TERM 1876186 00:07:30.889 
10:18:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:30.889 10:18:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:30.889 10:18:15 -- pm/common@44 -- $ pid=1876188 00:07:30.889 10:18:15 -- pm/common@50 -- $ kill -TERM 1876188 00:07:30.889 10:18:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:30.889 10:18:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:30.889 10:18:15 -- pm/common@44 -- $ pid=1876215 00:07:30.889 10:18:15 -- pm/common@50 -- $ sudo -E kill -TERM 1876215 00:07:30.889 10:18:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:30.889 10:18:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:31.149 10:18:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:31.149 10:18:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:31.149 10:18:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:31.409 10:18:15 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:31.409 10:18:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.409 10:18:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.409 10:18:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.409 10:18:15 -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.409 10:18:15 -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.409 10:18:15 -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.409 10:18:15 -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.409 10:18:15 -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.409 10:18:15 -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.409 10:18:15 -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.409 10:18:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.409 10:18:15 -- scripts/common.sh@344 -- # case "$op" in 00:07:31.409 10:18:15 -- scripts/common.sh@345 -- # : 1 00:07:31.409 10:18:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.409 10:18:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.409 10:18:15 -- scripts/common.sh@365 -- # decimal 1 00:07:31.409 10:18:15 -- scripts/common.sh@353 -- # local d=1 00:07:31.409 10:18:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.409 10:18:15 -- scripts/common.sh@355 -- # echo 1 00:07:31.409 10:18:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.409 10:18:15 -- scripts/common.sh@366 -- # decimal 2 00:07:31.409 10:18:15 -- scripts/common.sh@353 -- # local d=2 00:07:31.409 10:18:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.409 10:18:15 -- scripts/common.sh@355 -- # echo 2 00:07:31.409 10:18:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.409 10:18:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.409 10:18:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.409 10:18:15 -- scripts/common.sh@368 -- # return 0 00:07:31.409 10:18:15 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.409 10:18:15 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.409 --rc genhtml_branch_coverage=1 00:07:31.409 --rc genhtml_function_coverage=1 00:07:31.409 --rc genhtml_legend=1 00:07:31.409 --rc geninfo_all_blocks=1 00:07:31.409 --rc geninfo_unexecuted_blocks=1 00:07:31.409 00:07:31.409 ' 00:07:31.409 10:18:15 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.409 --rc genhtml_branch_coverage=1 00:07:31.409 --rc genhtml_function_coverage=1 00:07:31.409 --rc genhtml_legend=1 00:07:31.409 --rc geninfo_all_blocks=1 00:07:31.409 --rc geninfo_unexecuted_blocks=1 00:07:31.409 00:07:31.409 ' 00:07:31.409 10:18:15 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.409 --rc genhtml_branch_coverage=1 00:07:31.409 --rc genhtml_function_coverage=1 00:07:31.409 --rc genhtml_legend=1 00:07:31.409 --rc geninfo_all_blocks=1 00:07:31.409 --rc geninfo_unexecuted_blocks=1 00:07:31.409 00:07:31.409 ' 00:07:31.409 10:18:15 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.409 --rc genhtml_branch_coverage=1 00:07:31.409 --rc genhtml_function_coverage=1 00:07:31.409 --rc genhtml_legend=1 00:07:31.409 --rc geninfo_all_blocks=1 00:07:31.409 --rc geninfo_unexecuted_blocks=1 00:07:31.409 00:07:31.409 ' 00:07:31.409 10:18:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.409 10:18:15 -- nvmf/common.sh@7 -- # uname -s 00:07:31.409 10:18:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.409 10:18:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.409 10:18:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.409 10:18:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.409 10:18:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.409 10:18:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.409 10:18:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.409 10:18:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.409 10:18:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.409 10:18:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.409 10:18:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:31.409 10:18:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:31.409 10:18:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.409 10:18:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.409 10:18:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.409 10:18:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.409 10:18:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.409 10:18:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.409 10:18:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.409 10:18:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.409 10:18:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.409 10:18:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.409 10:18:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.409 10:18:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.409 10:18:15 -- paths/export.sh@5 -- # export PATH 00:07:31.409 10:18:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.409 10:18:15 -- nvmf/common.sh@51 -- # : 0 00:07:31.409 10:18:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.409 10:18:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.409 10:18:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.409 10:18:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.409 10:18:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.409 10:18:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.409 10:18:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.409 10:18:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.409 10:18:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.409 10:18:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:31.409 10:18:15 -- spdk/autotest.sh@32 -- # uname -s 00:07:31.409 10:18:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:31.409 10:18:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:31.409 10:18:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
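The "[: : integer expression expected" message recorded above is a real shell error, not test output: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) cannot compare an empty string numerically. A minimal sketch of the defensive form, with a hypothetical variable name standing in for whichever flag was unset:

    flag=${SOME_TEST_FLAG:-0}      # default to 0 so the operand is never empty
    if [ "$flag" -eq 1 ]; then     # now always a valid integer comparison
        echo "feature enabled"
    fi

The same guard can be written inline as [ "${SOME_TEST_FLAG:-0}" -eq 1 ] without the temporary.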
00:07:31.409 10:18:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:31.409 10:18:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:31.409 10:18:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:31.409 10:18:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:31.409 10:18:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:31.409 10:18:15 -- spdk/autotest.sh@48 -- # udevadm_pid=1941524 00:07:31.409 10:18:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:31.409 10:18:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:31.409 10:18:15 -- pm/common@17 -- # local monitor 00:07:31.409 10:18:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:31.409 10:18:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:31.409 10:18:15 -- pm/common@21 -- # date +%s 00:07:31.410 10:18:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:31.410 10:18:15 -- pm/common@21 -- # date +%s 00:07:31.410 10:18:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:31.410 10:18:15 -- pm/common@25 -- # sleep 1 00:07:31.410 10:18:15 -- pm/common@21 -- # date +%s 00:07:31.410 10:18:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735895 00:07:31.410 10:18:15 -- pm/common@21 -- # date +%s 00:07:31.410 10:18:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735895 00:07:31.410 10:18:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735895 00:07:31.410 10:18:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735895 00:07:31.410 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735895_collect-cpu-load.pm.log 00:07:31.410 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735895_collect-cpu-temp.pm.log 00:07:31.410 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735895_collect-vmstat.pm.log 00:07:31.410 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735895_collect-bmc-pm.bmc.pm.log 00:07:32.348 10:18:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:32.348 10:18:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:32.348 10:18:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.348 10:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:32.348 10:18:16 -- spdk/autotest.sh@59 -- # create_test_list 00:07:32.348 10:18:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:32.348 10:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:32.348 10:18:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:32.348 10:18:16 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.348 10:18:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.348 10:18:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:32.348 10:18:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:32.348 10:18:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:32.348 10:18:16 -- common/autotest_common.sh@1457 -- # uname 00:07:32.348 10:18:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:32.348 10:18:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:32.348 10:18:16 -- common/autotest_common.sh@1477 -- # uname 00:07:32.348 10:18:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:32.348 10:18:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:32.348 10:18:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:32.608 lcov: LCOV version 1.15 00:07:32.608 10:18:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:08:19.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:19.315 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:09:15.571 10:19:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:15.571 10:19:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.571 10:19:55 -- common/autotest_common.sh@10 -- # set +x 00:09:15.571 10:19:55 -- spdk/autotest.sh@78 -- # rm -f 00:09:15.571 10:19:55 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:15.571 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:09:15.571 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:09:15.571 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:09:15.571 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:09:15.571 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:09:15.571 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:09:15.571 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:09:15.571 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:09:15.571 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:09:15.571 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:09:15.571 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:09:15.571 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:09:15.571 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:09:15.571 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:09:15.571 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:09:15.571 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:09:15.571 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:09:15.571 10:19:57 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:09:15.571 10:19:57 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:15.571 10:19:57 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:15.571 10:19:57 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:15.571 10:19:57 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:15.571 10:19:57 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:15.571 10:19:57 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:15.571 10:19:57 -- common/autotest_common.sh@1669 -- # bdf=0000:82:00.0 00:09:15.571 10:19:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:15.571 10:19:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:15.571 10:19:57 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:15.571 10:19:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:15.571 10:19:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:15.571 10:19:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:15.571 10:19:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:15.571 10:19:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:15.571 10:19:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:15.571 10:19:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:15.571 10:19:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:15.571 No valid GPT data, bailing 00:09:15.571 10:19:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:15.571 10:19:57 -- scripts/common.sh@394 -- # pt= 00:09:15.571 10:19:57 -- scripts/common.sh@395 -- # return 1 00:09:15.571 10:19:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:15.571 1+0 records in 00:09:15.571 1+0 records out 00:09:15.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0032549 s, 322 MB/s 00:09:15.571 10:19:57 -- spdk/autotest.sh@105 -- # sync 00:09:15.571 10:19:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:15.572 10:19:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:15.572 10:19:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:16.508 10:20:01 -- spdk/autotest.sh@111 -- # uname -s 00:09:16.508 10:20:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:16.508 10:20:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:16.508 10:20:01 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:09:18.416 Hugepages 00:09:18.416 node hugesize free / total 00:09:18.416 node0 1048576kB 0 / 0 00:09:18.416 node0 2048kB 0 / 0 00:09:18.416 node1 1048576kB 0 / 0 00:09:18.416 node1 2048kB 0 / 0 00:09:18.416 00:09:18.416 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:18.416 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:09:18.416 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:09:18.416 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:09:18.416 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:09:18.416 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:09:18.416 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:09:18.416 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:09:18.416 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:09:18.416 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:09:18.416 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:09:18.416 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma 
- - 00:09:18.416 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:09:18.416 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:09:18.416 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:09:18.416 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:09:18.416 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:09:18.416 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:09:18.416 10:20:02 -- spdk/autotest.sh@117 -- # uname -s 00:09:18.416 10:20:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:18.416 10:20:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:18.416 10:20:02 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:20.327 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:20.328 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:20.328 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:20.328 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:20.328 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:09:20.328 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:20.328 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:20.328 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:20.328 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:21.264 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:09:21.264 10:20:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:22.644 10:20:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:22.644 10:20:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:22.644 10:20:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:22.644 10:20:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:22.644 10:20:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:22.644 10:20:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:22.644 10:20:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:22.644 10:20:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:22.644 10:20:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:22.645 10:20:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:22.645 10:20:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:09:22.645 10:20:06 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:24.080 Waiting for block devices as requested 00:09:24.080 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:09:24.339 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:09:24.339 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:09:24.599 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:09:24.599 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:09:24.599 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:09:24.861 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:09:24.861 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:09:24.861 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:09:25.121 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:09:25.121 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 
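The get_nvme_bdfs trace above shows how the harness enumerates NVMe controllers: scripts/gen_nvme.sh emits an SPDK JSON configuration and jq extracts each controller's PCI address (traddr). A minimal standalone sketch of the same pipeline, assuming it is run from an spdk checkout:

    rootdir=$(pwd)                 # assumption: current directory is the spdk repo root
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"     # on this node: 0000:82:00.0

The (( 1 == 0 )) check in the trace is exactly this emptiness guard, taken with one controller found.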
00:09:25.121 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:09:25.380 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:09:25.380 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:09:25.380 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:09:25.380 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:09:25.640 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:09:25.640 10:20:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:25.640 10:20:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1487 -- # grep 0000:82:00.0/nvme/nvme 00:09:25.640 10:20:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:09:25.640 10:20:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:25.640 10:20:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:25.640 10:20:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:25.640 10:20:10 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:09:25.640 10:20:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:25.640 10:20:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:25.640 10:20:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:25.640 10:20:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:25.640 10:20:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:25.640 10:20:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:25.640 10:20:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:25.640 10:20:10 -- common/autotest_common.sh@1543 -- # continue 00:09:25.640 10:20:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:25.640 10:20:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.641 10:20:10 -- common/autotest_common.sh@10 -- # set +x 00:09:25.641 10:20:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:25.641 10:20:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.641 10:20:10 -- common/autotest_common.sh@10 -- # set +x 00:09:25.641 10:20:10 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:27.552 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:27.552 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:27.552 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:27.552 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:27.552 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:09:27.552 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:27.552 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:27.552 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:27.552 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:27.552 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:27.552 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:27.552 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:27.552 0000:80:04.3 
(8086 0e23): ioatdma -> vfio-pci 00:09:27.552 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:27.552 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:27.812 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:28.413 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:09:28.697 10:20:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:28.697 10:20:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.697 10:20:13 -- common/autotest_common.sh@10 -- # set +x 00:09:28.697 10:20:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:28.697 10:20:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:28.697 10:20:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:28.697 10:20:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:28.697 10:20:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:28.697 10:20:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:28.697 10:20:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:28.697 10:20:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:28.697 10:20:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:28.697 10:20:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:28.697 10:20:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.697 10:20:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:28.697 10:20:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:28.697 10:20:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:28.697 10:20:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:09:28.697 10:20:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:28.697 10:20:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:09:28.697 10:20:13 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:09:28.697 10:20:13 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:09:28.697 10:20:13 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:09:28.697 10:20:13 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:09:28.697 10:20:13 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:82:00.0 00:09:28.697 10:20:13 -- common/autotest_common.sh@1579 -- # [[ -z 0000:82:00.0 ]] 00:09:28.697 10:20:13 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1959316 00:09:28.697 10:20:13 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:28.697 10:20:13 -- common/autotest_common.sh@1585 -- # waitforlisten 1959316 00:09:28.697 10:20:13 -- common/autotest_common.sh@835 -- # '[' -z 1959316 ']' 00:09:28.697 10:20:13 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.697 10:20:13 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.697 10:20:13 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.697 10:20:13 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.697 10:20:13 -- common/autotest_common.sh@10 -- # set +x 00:09:29.003 [2024-12-09 10:20:13.405281] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
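The opal_revert_cleanup trace above filters the BDF list down to controllers with PCI device ID 0x0a54 by reading each device's ID from sysfs. A minimal sketch of that filter:

    want=0x0a54                              # device ID matched in the trace above
    matched=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$want" ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"            # here: 0000:82:00.0

The escaped comparison in the trace ([[ 0x0a54 == \0\x\0\a\5\4 ]]) is this same string match with each character glob-escaped.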
00:09:29.003 [2024-12-09 10:20:13.405383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959316 ] 00:09:29.003 [2024-12-09 10:20:13.536176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.273 [2024-12-09 10:20:13.647452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.532 10:20:14 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.532 10:20:14 -- common/autotest_common.sh@868 -- # return 0 00:09:29.532 10:20:14 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:09:29.532 10:20:14 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:09:29.532 10:20:14 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:09:33.725 nvme0n1 00:09:33.725 10:20:17 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:09:33.725 [2024-12-09 10:20:18.160832] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:09:33.725 [2024-12-09 10:20:18.160923] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:09:33.725 request: 00:09:33.725 { 00:09:33.725 "nvme_ctrlr_name": "nvme0", 00:09:33.725 "password": "test", 00:09:33.725 "method": "bdev_nvme_opal_revert", 00:09:33.725 "req_id": 1 00:09:33.725 } 00:09:33.725 Got JSON-RPC error response 00:09:33.725 response: 00:09:33.725 { 00:09:33.725 "code": -32603, 00:09:33.725 "message": "Internal error" 00:09:33.725 } 00:09:33.725 10:20:18 -- common/autotest_common.sh@1591 -- # true 00:09:33.725 10:20:18 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:09:33.725 10:20:18 -- common/autotest_common.sh@1595 -- # killprocess 1959316 00:09:33.725 10:20:18 -- common/autotest_common.sh@954 -- # '[' -z 1959316 ']' 00:09:33.725 10:20:18 -- common/autotest_common.sh@958 -- # kill -0 1959316 00:09:33.725 10:20:18 -- common/autotest_common.sh@959 -- # uname 00:09:33.725 10:20:18 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.725 10:20:18 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1959316 00:09:33.725 10:20:18 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.725 10:20:18 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.725 10:20:18 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1959316' 00:09:33.725 killing process with pid 1959316 00:09:33.725 10:20:18 -- common/autotest_common.sh@973 -- # kill 1959316 00:09:33.725 10:20:18 -- common/autotest_common.sh@978 -- # wait 1959316 00:09:36.260 10:20:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:36.260 10:20:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:36.260 10:20:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:36.260 10:20:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:36.260 10:20:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:36.260 10:20:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.260 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:09:36.260 10:20:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:36.260 10:20:20 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:09:36.260 10:20:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.260 10:20:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.260 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:09:36.260 ************************************ 00:09:36.260 START TEST env 00:09:36.260 ************************************ 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:09:36.260 * Looking for test storage... 00:09:36.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.260 10:20:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.260 10:20:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.260 10:20:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.260 10:20:20 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.260 10:20:20 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.260 10:20:20 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.260 10:20:20 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.260 10:20:20 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.260 10:20:20 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.260 10:20:20 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.260 10:20:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.260 10:20:20 env -- scripts/common.sh@344 -- # case "$op" in 00:09:36.260 10:20:20 env -- scripts/common.sh@345 -- # : 1 00:09:36.260 10:20:20 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.260 10:20:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.260 10:20:20 env -- scripts/common.sh@365 -- # decimal 1 00:09:36.260 10:20:20 env -- scripts/common.sh@353 -- # local d=1 00:09:36.260 10:20:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.260 10:20:20 env -- scripts/common.sh@355 -- # echo 1 00:09:36.260 10:20:20 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.260 10:20:20 env -- scripts/common.sh@366 -- # decimal 2 00:09:36.260 10:20:20 env -- scripts/common.sh@353 -- # local d=2 00:09:36.260 10:20:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.260 10:20:20 env -- scripts/common.sh@355 -- # echo 2 00:09:36.260 10:20:20 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.260 10:20:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.260 10:20:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.260 10:20:20 env -- scripts/common.sh@368 -- # return 0 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.260 --rc genhtml_branch_coverage=1 00:09:36.260 --rc genhtml_function_coverage=1 00:09:36.260 --rc genhtml_legend=1 00:09:36.260 --rc geninfo_all_blocks=1 00:09:36.260 --rc geninfo_unexecuted_blocks=1 00:09:36.260 00:09:36.260 ' 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.260 --rc genhtml_branch_coverage=1 00:09:36.260 --rc genhtml_function_coverage=1 00:09:36.260 --rc genhtml_legend=1 00:09:36.260 --rc geninfo_all_blocks=1 00:09:36.260 --rc geninfo_unexecuted_blocks=1 00:09:36.260 00:09:36.260 ' 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.260 --rc genhtml_branch_coverage=1 00:09:36.260 --rc genhtml_function_coverage=1 00:09:36.260 --rc genhtml_legend=1 00:09:36.260 --rc geninfo_all_blocks=1 00:09:36.260 --rc geninfo_unexecuted_blocks=1 00:09:36.260 00:09:36.260 ' 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.260 --rc genhtml_branch_coverage=1 00:09:36.260 --rc genhtml_function_coverage=1 00:09:36.260 --rc genhtml_legend=1 00:09:36.260 --rc geninfo_all_blocks=1 00:09:36.260 --rc geninfo_unexecuted_blocks=1 00:09:36.260 00:09:36.260 ' 00:09:36.260 10:20:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.260 10:20:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.260 10:20:20 env -- common/autotest_common.sh@10 -- # set +x 00:09:36.260 ************************************ 00:09:36.260 START TEST env_memory 00:09:36.260 ************************************ 00:09:36.260 10:20:20 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:36.260 00:09:36.260 00:09:36.260 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.260 http://cunit.sourceforge.net/ 00:09:36.260 00:09:36.260 00:09:36.260 Suite: memory 00:09:36.260 Test: alloc and free memory map ...[2024-12-09 10:20:20.808924] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:36.260 passed 00:09:36.260 Test: mem map translation ...[2024-12-09 10:20:20.867462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:36.260 [2024-12-09 10:20:20.867522] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:36.260 [2024-12-09 10:20:20.867638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:36.260 [2024-12-09 10:20:20.867671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:36.519 passed 00:09:36.519 Test: mem map registration ...[2024-12-09 10:20:20.993164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:36.519 [2024-12-09 10:20:20.993218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:36.519 passed 00:09:36.519 Test: mem map adjacent registrations ...passed 00:09:36.519 00:09:36.519 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.519 suites 1 1 n/a 0 0 00:09:36.519 tests 4 4 4 0 0 00:09:36.520 asserts 152 152 152 0 n/a 00:09:36.520 00:09:36.520 Elapsed time = 0.408 seconds 00:09:36.520 00:09:36.520 real 0m0.424s 00:09:36.520 user 0m0.407s 00:09:36.520 sys 0m0.015s 00:09:36.520 10:20:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.520 10:20:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:36.520 ************************************ 00:09:36.520 END TEST env_memory 00:09:36.520 ************************************ 00:09:36.779 10:20:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:36.780 10:20:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.780 10:20:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.780 10:20:21 env -- common/autotest_common.sh@10 -- # set +x 00:09:36.780 ************************************ 00:09:36.780 START TEST env_vtophys 00:09:36.780 ************************************ 00:09:36.780 10:20:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:36.780 EAL: lib.eal log level changed from notice to debug 00:09:36.780 EAL: Detected lcore 0 as core 0 on socket 0 00:09:36.780 EAL: Detected lcore 1 as core 1 on socket 0 00:09:36.780 EAL: Detected lcore 2 as core 2 on socket 0 00:09:36.780 EAL: Detected lcore 3 as core 3 on socket 0 00:09:36.780 EAL: Detected lcore 4 as core 4 on socket 0 00:09:36.780 EAL: Detected lcore 5 as core 5 on socket 0 00:09:36.780 EAL: Detected lcore 6 as core 8 on socket 0 00:09:36.780 EAL: Detected lcore 7 as core 9 on socket 0 00:09:36.780 EAL: Detected lcore 8 as core 10 on socket 0 00:09:36.780 EAL: Detected lcore 9 as core 11 on socket 0 00:09:36.780 EAL: Detected lcore 10 
as core 12 on socket 0 00:09:36.780 EAL: Detected lcore 11 as core 13 on socket 0 00:09:36.780 EAL: Detected lcore 12 as core 0 on socket 1 00:09:36.780 EAL: Detected lcore 13 as core 1 on socket 1 00:09:36.780 EAL: Detected lcore 14 as core 2 on socket 1 00:09:36.780 EAL: Detected lcore 15 as core 3 on socket 1 00:09:36.780 EAL: Detected lcore 16 as core 4 on socket 1 00:09:36.780 EAL: Detected lcore 17 as core 5 on socket 1 00:09:36.780 EAL: Detected lcore 18 as core 8 on socket 1 00:09:36.780 EAL: Detected lcore 19 as core 9 on socket 1 00:09:36.780 EAL: Detected lcore 20 as core 10 on socket 1 00:09:36.780 EAL: Detected lcore 21 as core 11 on socket 1 00:09:36.780 EAL: Detected lcore 22 as core 12 on socket 1 00:09:36.780 EAL: Detected lcore 23 as core 13 on socket 1 00:09:36.780 EAL: Detected lcore 24 as core 0 on socket 0 00:09:36.780 EAL: Detected lcore 25 as core 1 on socket 0 00:09:36.780 EAL: Detected lcore 26 as core 2 on socket 0 00:09:36.780 EAL: Detected lcore 27 as core 3 on socket 0 00:09:36.780 EAL: Detected lcore 28 as core 4 on socket 0 00:09:36.780 EAL: Detected lcore 29 as core 5 on socket 0 00:09:36.780 EAL: Detected lcore 30 as core 8 on socket 0 00:09:36.780 EAL: Detected lcore 31 as core 9 on socket 0 00:09:36.780 EAL: Detected lcore 32 as core 10 on socket 0 00:09:36.780 EAL: Detected lcore 33 as core 11 on socket 0 00:09:36.780 EAL: Detected lcore 34 as core 12 on socket 0 00:09:36.780 EAL: Detected lcore 35 as core 13 on socket 0 00:09:36.780 EAL: Detected lcore 36 as core 0 on socket 1 00:09:36.780 EAL: Detected lcore 37 as core 1 on socket 1 00:09:36.780 EAL: Detected lcore 38 as core 2 on socket 1 00:09:36.780 EAL: Detected lcore 39 as core 3 on socket 1 00:09:36.780 EAL: Detected lcore 40 as core 4 on socket 1 00:09:36.780 EAL: Detected lcore 41 as core 5 on socket 1 00:09:36.780 EAL: Detected lcore 42 as core 8 on socket 1 00:09:36.780 EAL: Detected lcore 43 as core 9 on socket 1 00:09:36.780 EAL: Detected lcore 44 as core 10 on socket 1 00:09:36.780 EAL: Detected lcore 45 as core 11 on socket 1 00:09:36.780 EAL: Detected lcore 46 as core 12 on socket 1 00:09:36.780 EAL: Detected lcore 47 as core 13 on socket 1 00:09:36.780 EAL: Maximum logical cores by configuration: 128 00:09:36.780 EAL: Detected CPU lcores: 48 00:09:36.780 EAL: Detected NUMA nodes: 2 00:09:36.780 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:36.780 EAL: Detected shared linkage of DPDK 00:09:36.780 EAL: No shared files mode enabled, IPC will be disabled 00:09:36.780 EAL: Bus pci wants IOVA as 'DC' 00:09:36.780 EAL: Buses did not request a specific IOVA mode. 00:09:36.780 EAL: IOMMU is available, selecting IOVA as VA mode. 00:09:36.780 EAL: Selected IOVA mode 'VA' 00:09:36.780 EAL: Probing VFIO support... 00:09:36.780 EAL: IOMMU type 1 (Type 1) is supported 00:09:36.780 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:36.780 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:36.780 EAL: VFIO support initialized 00:09:36.780 EAL: Ask a virtual area of 0x2e000 bytes 00:09:36.780 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:36.780 EAL: Setting up physically contiguous memory... 
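The EAL lines above ("IOMMU is available, selecting IOVA as VA mode", "VFIO support initialized") reflect a kernel-side condition: when the IOMMU is active, every PCI device belongs to a group under /sys/kernel/iommu_groups. A quick shell check along those lines (a sketch of the idea, not necessarily how EAL itself probes):

    if [ -d /sys/kernel/iommu_groups ] && \
       [ "$(ls /sys/kernel/iommu_groups | wc -l)" -gt 0 ]; then
        echo "IOMMU groups present: vfio-pci can run with IOVA=VA"
    else
        echo "no IOMMU groups: expect IOVA=PA or vfio no-IOMMU mode"
    fi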
00:09:36.780 EAL: Setting maximum number of open files to 524288 00:09:36.780 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:36.780 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:09:36.780 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:36.780 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:09:36.780 EAL: Ask a virtual area of 0x61000 bytes 00:09:36.780 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:09:36.780 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:36.780 EAL: Ask a virtual area of 0x400000000 bytes 00:09:36.780 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:09:36.780 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:09:36.780 EAL: Hugepages will be freed exactly as allocated. 00:09:36.780 EAL: No shared files mode enabled, IPC is disabled 00:09:36.780 EAL: No shared files mode enabled, IPC is disabled 00:09:36.780 EAL: TSC frequency is ~2700000 KHz 00:09:36.780 EAL: Main lcore 0 is ready (tid=7fe1052a0a00;cpuset=[0]) 00:09:36.780 EAL: Trying to obtain current memory policy. 00:09:36.780 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:36.780 EAL: Restoring previous memory policy: 0 00:09:36.780 EAL: request: mp_malloc_sync 00:09:36.780 EAL: No shared files mode enabled, IPC is disabled 00:09:36.780 EAL: Heap on socket 0 was expanded by 2MB 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:36.781 EAL: Mem event callback 'spdk:(nil)' registered 00:09:36.781 00:09:36.781 00:09:36.781 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.781 http://cunit.sourceforge.net/ 00:09:36.781 00:09:36.781 00:09:36.781 Suite: components_suite 00:09:36.781 Test: vtophys_malloc_test ...passed 00:09:36.781 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:36.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:36.781 EAL: Restoring previous memory policy: 4 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was expanded by 4MB 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was shrunk by 4MB 00:09:36.781 EAL: Trying to obtain current memory policy. 00:09:36.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:36.781 EAL: Restoring previous memory policy: 4 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was expanded by 6MB 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was shrunk by 6MB 00:09:36.781 EAL: Trying to obtain current memory policy. 00:09:36.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:36.781 EAL: Restoring previous memory policy: 4 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was expanded by 10MB 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was shrunk by 10MB 00:09:36.781 EAL: Trying to obtain current memory policy. 
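Each "Heap on socket 0 was expanded by 2MB" line above corresponds to EAL backing the malloc heap with another 2048kB hugepage, and "Hugepages will be freed exactly as allocated" means the shrink path unmaps those same pages. The kernel-side accounting can be watched from outside the test with generic Linux interfaces, independent of SPDK:

    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages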
00:09:36.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:36.781 EAL: Restoring previous memory policy: 4 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was expanded by 18MB 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was shrunk by 18MB 00:09:36.781 EAL: Trying to obtain current memory policy. 00:09:36.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:36.781 EAL: Restoring previous memory policy: 4 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was expanded by 34MB 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was shrunk by 34MB 00:09:36.781 EAL: Trying to obtain current memory policy. 00:09:36.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:36.781 EAL: Restoring previous memory policy: 4 00:09:36.781 EAL: Calling mem event callback 'spdk:(nil)' 00:09:36.781 EAL: request: mp_malloc_sync 00:09:36.781 EAL: No shared files mode enabled, IPC is disabled 00:09:36.781 EAL: Heap on socket 0 was expanded by 66MB 00:09:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.040 EAL: request: mp_malloc_sync 00:09:37.040 EAL: No shared files mode enabled, IPC is disabled 00:09:37.040 EAL: Heap on socket 0 was shrunk by 66MB 00:09:37.040 EAL: Trying to obtain current memory policy. 00:09:37.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.040 EAL: Restoring previous memory policy: 4 00:09:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.040 EAL: request: mp_malloc_sync 00:09:37.040 EAL: No shared files mode enabled, IPC is disabled 00:09:37.040 EAL: Heap on socket 0 was expanded by 130MB 00:09:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.040 EAL: request: mp_malloc_sync 00:09:37.040 EAL: No shared files mode enabled, IPC is disabled 00:09:37.040 EAL: Heap on socket 0 was shrunk by 130MB 00:09:37.040 EAL: Trying to obtain current memory policy. 00:09:37.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.040 EAL: Restoring previous memory policy: 4 00:09:37.040 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.040 EAL: request: mp_malloc_sync 00:09:37.040 EAL: No shared files mode enabled, IPC is disabled 00:09:37.040 EAL: Heap on socket 0 was expanded by 258MB 00:09:37.299 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.299 EAL: request: mp_malloc_sync 00:09:37.299 EAL: No shared files mode enabled, IPC is disabled 00:09:37.299 EAL: Heap on socket 0 was shrunk by 258MB 00:09:37.299 EAL: Trying to obtain current memory policy. 
00:09:37.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.558 EAL: Restoring previous memory policy: 4 00:09:37.558 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.558 EAL: request: mp_malloc_sync 00:09:37.558 EAL: No shared files mode enabled, IPC is disabled 00:09:37.558 EAL: Heap on socket 0 was expanded by 514MB 00:09:37.817 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.817 EAL: request: mp_malloc_sync 00:09:37.817 EAL: No shared files mode enabled, IPC is disabled 00:09:37.817 EAL: Heap on socket 0 was shrunk by 514MB 00:09:37.817 EAL: Trying to obtain current memory policy. 00:09:37.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:38.384 EAL: Restoring previous memory policy: 4 00:09:38.384 EAL: Calling mem event callback 'spdk:(nil)' 00:09:38.384 EAL: request: mp_malloc_sync 00:09:38.384 EAL: No shared files mode enabled, IPC is disabled 00:09:38.384 EAL: Heap on socket 0 was expanded by 1026MB 00:09:38.643 EAL: Calling mem event callback 'spdk:(nil)' 00:09:38.902 EAL: request: mp_malloc_sync 00:09:38.902 EAL: No shared files mode enabled, IPC is disabled 00:09:38.902 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:38.902 passed 00:09:38.902 00:09:38.902 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.902 suites 1 1 n/a 0 0 00:09:38.902 tests 2 2 2 0 0 00:09:38.902 asserts 497 497 497 0 n/a 00:09:38.902 00:09:38.902 Elapsed time = 2.126 seconds 00:09:38.902 EAL: Calling mem event callback 'spdk:(nil)' 00:09:38.902 EAL: request: mp_malloc_sync 00:09:38.902 EAL: No shared files mode enabled, IPC is disabled 00:09:38.902 EAL: Heap on socket 0 was shrunk by 2MB 00:09:38.902 EAL: No shared files mode enabled, IPC is disabled 00:09:38.902 EAL: No shared files mode enabled, IPC is disabled 00:09:38.902 EAL: No shared files mode enabled, IPC is disabled 00:09:38.902 00:09:38.902 real 0m2.330s 00:09:38.902 user 0m1.273s 00:09:38.902 sys 0m1.005s 00:09:38.902 10:20:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.902 10:20:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:38.902 ************************************ 00:09:38.902 END TEST env_vtophys 00:09:38.902 ************************************ 00:09:39.161 10:20:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:39.161 10:20:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.161 10:20:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.161 10:20:23 env -- common/autotest_common.sh@10 -- # set +x 00:09:39.161 ************************************ 00:09:39.161 START TEST env_pci 00:09:39.161 ************************************ 00:09:39.161 10:20:23 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:39.161 00:09:39.161 00:09:39.161 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.161 http://cunit.sourceforge.net/ 00:09:39.161 00:09:39.161 00:09:39.161 Suite: pci 00:09:39.161 Test: pci_hook ...[2024-12-09 10:20:23.628409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1960530 has claimed it 00:09:39.161 EAL: Cannot find device (10000:00:01.0) 00:09:39.161 EAL: Failed to attach device on primary process 00:09:39.161 passed 00:09:39.161 00:09:39.161 Run Summary: Type Total Ran Passed Failed Inactive 
00:09:39.161 suites 1 1 n/a 0 0 00:09:39.161 tests 1 1 1 0 0 00:09:39.161 asserts 25 25 25 0 n/a 00:09:39.161 00:09:39.161 Elapsed time = 0.047 seconds 00:09:39.161 00:09:39.161 real 0m0.075s 00:09:39.161 user 0m0.022s 00:09:39.161 sys 0m0.052s 00:09:39.161 10:20:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.161 10:20:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:39.161 ************************************ 00:09:39.161 END TEST env_pci 00:09:39.161 ************************************ 00:09:39.161 10:20:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:39.161 10:20:23 env -- env/env.sh@15 -- # uname 00:09:39.161 10:20:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:39.161 10:20:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:39.161 10:20:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:39.161 10:20:23 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:39.161 10:20:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.161 10:20:23 env -- common/autotest_common.sh@10 -- # set +x 00:09:39.161 ************************************ 00:09:39.161 START TEST env_dpdk_post_init 00:09:39.161 ************************************ 00:09:39.161 10:20:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:39.419 EAL: Detected CPU lcores: 48 00:09:39.419 EAL: Detected NUMA nodes: 2 00:09:39.419 EAL: Detected shared linkage of DPDK 00:09:39.419 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:39.419 EAL: Selected IOVA mode 'VA' 00:09:39.419 EAL: VFIO support initialized 00:09:39.419 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:39.419 EAL: Using IOMMU type 1 (Type 1) 00:09:39.419 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:09:39.419 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:09:39.678 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:09:40.619 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 
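The env_dpdk_post_init pass above (lcore/NUMA detection, IOVA mode 'VA' selection, then the ioat and NVMe PCI probes) is driven by the exact command shown at the start of the test, so it can be reproduced by hand outside Jenkins; the run continues below with the PCI unmap and cleanup. A minimal sketch, assuming a local SPDK checkout with hugepages and device binding already handled by scripts/setup.sh (the HUGEMEM value is illustrative; the binary path and flags are the ones recorded in this log):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo HUGEMEM=2048 scripts/setup.sh   # reserve hugepages, bind devices to vfio-pci/uio
    sudo test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000

Here -c 0x1 restricts the run to lcore 0, and --base-virtaddr pins the DPDK memory maps at a fixed virtual address so secondary processes can attach at matching addresses, which is why env.sh appends that flag earlier in this run.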
00:09:43.922 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:09:43.922 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:09:43.922 Starting DPDK initialization... 00:09:43.922 Starting SPDK post initialization... 00:09:43.922 SPDK NVMe probe 00:09:43.922 Attaching to 0000:82:00.0 00:09:43.922 Attached to 0000:82:00.0 00:09:43.922 Cleaning up... 00:09:43.922 00:09:43.922 real 0m4.562s 00:09:43.922 user 0m3.077s 00:09:43.922 sys 0m0.539s 00:09:43.923 10:20:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.923 10:20:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:43.923 ************************************ 00:09:43.923 END TEST env_dpdk_post_init 00:09:43.923 ************************************ 00:09:43.923 10:20:28 env -- env/env.sh@26 -- # uname 00:09:43.923 10:20:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:43.923 10:20:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:43.923 10:20:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.923 10:20:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.923 10:20:28 env -- common/autotest_common.sh@10 -- # set +x 00:09:43.923 ************************************ 00:09:43.923 START TEST env_mem_callbacks 00:09:43.923 ************************************ 00:09:43.923 10:20:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:43.923 EAL: Detected CPU lcores: 48 00:09:43.923 EAL: Detected NUMA nodes: 2 00:09:43.923 EAL: Detected shared linkage of DPDK 00:09:43.923 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:43.923 EAL: Selected IOVA mode 'VA' 00:09:43.923 EAL: VFIO support initialized 00:09:43.923 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:43.923 00:09:43.923 00:09:43.923 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.923 http://cunit.sourceforge.net/ 00:09:43.923 00:09:43.923 00:09:43.923 Suite: memory 00:09:43.923 Test: test ... 
00:09:43.923 register 0x200000200000 2097152 00:09:43.923 malloc 3145728 00:09:43.923 register 0x200000400000 4194304 00:09:43.923 buf 0x200000500000 len 3145728 PASSED 00:09:43.923 malloc 64 00:09:43.923 buf 0x2000004fff40 len 64 PASSED 00:09:43.923 malloc 4194304 00:09:43.923 register 0x200000800000 6291456 00:09:43.923 buf 0x200000a00000 len 4194304 PASSED 00:09:43.923 free 0x200000500000 3145728 00:09:43.923 free 0x2000004fff40 64 00:09:43.923 unregister 0x200000400000 4194304 PASSED 00:09:43.923 free 0x200000a00000 4194304 00:09:43.923 unregister 0x200000800000 6291456 PASSED 00:09:43.923 malloc 8388608 00:09:43.923 register 0x200000400000 10485760 00:09:43.923 buf 0x200000600000 len 8388608 PASSED 00:09:43.923 free 0x200000600000 8388608 00:09:43.923 unregister 0x200000400000 10485760 PASSED 00:09:43.923 passed 00:09:43.923 00:09:43.923 Run Summary: Type Total Ran Passed Failed Inactive 00:09:43.923 suites 1 1 n/a 0 0 00:09:43.923 tests 1 1 1 0 0 00:09:43.923 asserts 15 15 15 0 n/a 00:09:43.923 00:09:43.923 Elapsed time = 0.011 seconds 00:09:43.923 00:09:43.923 real 0m0.106s 00:09:43.923 user 0m0.032s 00:09:43.923 sys 0m0.072s 00:09:43.923 10:20:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.923 10:20:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:43.923 ************************************ 00:09:43.923 END TEST env_mem_callbacks 00:09:43.923 ************************************ 00:09:43.923 00:09:43.923 real 0m8.155s 00:09:43.923 user 0m5.165s 00:09:43.923 sys 0m2.017s 00:09:43.923 10:20:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.923 10:20:28 env -- common/autotest_common.sh@10 -- # set +x 00:09:43.923 ************************************ 00:09:43.923 END TEST env 00:09:43.923 ************************************ 00:09:44.181 10:20:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:44.181 10:20:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.181 10:20:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.181 10:20:28 -- common/autotest_common.sh@10 -- # set +x 00:09:44.181 ************************************ 00:09:44.181 START TEST rpc 00:09:44.181 ************************************ 00:09:44.181 10:20:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:44.181 * Looking for test storage... 
00:09:44.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:44.181 10:20:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.181 10:20:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.181 10:20:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.181 10:20:28 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.181 10:20:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.181 10:20:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.181 10:20:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.181 10:20:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.181 10:20:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.181 10:20:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.182 10:20:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.182 10:20:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.182 10:20:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.182 10:20:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.182 10:20:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.182 10:20:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:44.182 10:20:28 rpc -- scripts/common.sh@345 -- # : 1 00:09:44.182 10:20:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.182 10:20:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.182 10:20:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:44.182 10:20:28 rpc -- scripts/common.sh@353 -- # local d=1 00:09:44.182 10:20:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.182 10:20:28 rpc -- scripts/common.sh@355 -- # echo 1 00:09:44.182 10:20:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.182 10:20:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:44.182 10:20:28 rpc -- scripts/common.sh@353 -- # local d=2 00:09:44.182 10:20:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.182 10:20:28 rpc -- scripts/common.sh@355 -- # echo 2 00:09:44.182 10:20:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.182 10:20:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.182 10:20:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.182 10:20:28 rpc -- scripts/common.sh@368 -- # return 0 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.182 --rc genhtml_branch_coverage=1 00:09:44.182 --rc genhtml_function_coverage=1 00:09:44.182 --rc genhtml_legend=1 00:09:44.182 --rc geninfo_all_blocks=1 00:09:44.182 --rc geninfo_unexecuted_blocks=1 00:09:44.182 00:09:44.182 ' 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.182 --rc genhtml_branch_coverage=1 00:09:44.182 --rc genhtml_function_coverage=1 00:09:44.182 --rc genhtml_legend=1 00:09:44.182 --rc geninfo_all_blocks=1 00:09:44.182 --rc geninfo_unexecuted_blocks=1 00:09:44.182 00:09:44.182 ' 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:44.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.182 --rc genhtml_branch_coverage=1 00:09:44.182 --rc genhtml_function_coverage=1 
00:09:44.182 --rc genhtml_legend=1 00:09:44.182 --rc geninfo_all_blocks=1 00:09:44.182 --rc geninfo_unexecuted_blocks=1 00:09:44.182 00:09:44.182 ' 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.182 --rc genhtml_branch_coverage=1 00:09:44.182 --rc genhtml_function_coverage=1 00:09:44.182 --rc genhtml_legend=1 00:09:44.182 --rc geninfo_all_blocks=1 00:09:44.182 --rc geninfo_unexecuted_blocks=1 00:09:44.182 00:09:44.182 ' 00:09:44.182 10:20:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1961267 00:09:44.182 10:20:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:09:44.182 10:20:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.182 10:20:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1961267 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 1961267 ']' 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.182 10:20:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.440 [2024-12-09 10:20:28.897987] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:09:44.440 [2024-12-09 10:20:28.898097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961267 ] 00:09:44.440 [2024-12-09 10:20:28.981568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.440 [2024-12-09 10:20:29.084857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:44.440 [2024-12-09 10:20:29.084963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1961267' to capture a snapshot of events at runtime. 00:09:44.440 [2024-12-09 10:20:29.085002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.440 [2024-12-09 10:20:29.085033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.440 [2024-12-09 10:20:29.085059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1961267 for offline analysis/debug. 
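Once spdk_tgt is up and listening on /var/tmp/spdk.sock (the waitforlisten step above), every rpc_cmd call in the rpc_integrity test that follows is a plain JSON-RPC request, so the same flow can be replayed by hand with scripts/rpc.py. A minimal sketch using only methods that appear in this run (the jq check mirrors what the test asserts; the shell variable is just shorthand):

    RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC bdev_malloc_create 8 512                      # 8 MiB malloc bdev, 512 B blocks -> Malloc0
    $RPC bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 under a passthru bdev
    $RPC bdev_get_bdevs | jq length                    # 2: Malloc0 plus Passthru0
    $RPC bdev_passthru_delete Passthru0
    $RPC bdev_malloc_delete Malloc0

Because the target was started with '-e bdev', the bdev tpoint group is enabled, and a snapshot of those trace points can be taken with the 'spdk_trace -s spdk_tgt -p <pid>' command suggested in the NOTICE lines above.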
00:09:44.440 [2024-12-09 10:20:29.086366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.009 10:20:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.009 10:20:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:45.009 10:20:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:45.009 10:20:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:45.009 10:20:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:45.009 10:20:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:45.009 10:20:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.010 10:20:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.010 10:20:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.010 ************************************ 00:09:45.010 START TEST rpc_integrity 00:09:45.010 ************************************ 00:09:45.010 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:45.010 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:45.010 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.010 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.010 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.010 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:45.010 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:45.268 { 00:09:45.268 "name": "Malloc0", 00:09:45.268 "aliases": [ 00:09:45.268 "9f289f4a-1f39-4d34-8071-6b23402bace5" 00:09:45.268 ], 00:09:45.268 "product_name": "Malloc disk", 00:09:45.268 "block_size": 512, 00:09:45.268 "num_blocks": 16384, 00:09:45.268 "uuid": "9f289f4a-1f39-4d34-8071-6b23402bace5", 00:09:45.268 "assigned_rate_limits": { 00:09:45.268 "rw_ios_per_sec": 0, 00:09:45.268 "rw_mbytes_per_sec": 0, 00:09:45.268 "r_mbytes_per_sec": 0, 00:09:45.268 "w_mbytes_per_sec": 0 00:09:45.268 }, 
00:09:45.268 "claimed": false, 00:09:45.268 "zoned": false, 00:09:45.268 "supported_io_types": { 00:09:45.268 "read": true, 00:09:45.268 "write": true, 00:09:45.268 "unmap": true, 00:09:45.268 "flush": true, 00:09:45.268 "reset": true, 00:09:45.268 "nvme_admin": false, 00:09:45.268 "nvme_io": false, 00:09:45.268 "nvme_io_md": false, 00:09:45.268 "write_zeroes": true, 00:09:45.268 "zcopy": true, 00:09:45.268 "get_zone_info": false, 00:09:45.268 "zone_management": false, 00:09:45.268 "zone_append": false, 00:09:45.268 "compare": false, 00:09:45.268 "compare_and_write": false, 00:09:45.268 "abort": true, 00:09:45.268 "seek_hole": false, 00:09:45.268 "seek_data": false, 00:09:45.268 "copy": true, 00:09:45.268 "nvme_iov_md": false 00:09:45.268 }, 00:09:45.268 "memory_domains": [ 00:09:45.268 { 00:09:45.268 "dma_device_id": "system", 00:09:45.268 "dma_device_type": 1 00:09:45.268 }, 00:09:45.268 { 00:09:45.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.268 "dma_device_type": 2 00:09:45.268 } 00:09:45.268 ], 00:09:45.268 "driver_specific": {} 00:09:45.268 } 00:09:45.268 ]' 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.268 [2024-12-09 10:20:29.848697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:45.268 [2024-12-09 10:20:29.848805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.268 [2024-12-09 10:20:29.848832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e66a0 00:09:45.268 [2024-12-09 10:20:29.848848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.268 [2024-12-09 10:20:29.851794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.268 [2024-12-09 10:20:29.851824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:45.268 Passthru0 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.268 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.268 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:45.268 { 00:09:45.268 "name": "Malloc0", 00:09:45.268 "aliases": [ 00:09:45.268 "9f289f4a-1f39-4d34-8071-6b23402bace5" 00:09:45.268 ], 00:09:45.268 "product_name": "Malloc disk", 00:09:45.268 "block_size": 512, 00:09:45.268 "num_blocks": 16384, 00:09:45.268 "uuid": "9f289f4a-1f39-4d34-8071-6b23402bace5", 00:09:45.268 "assigned_rate_limits": { 00:09:45.268 "rw_ios_per_sec": 0, 00:09:45.268 "rw_mbytes_per_sec": 0, 00:09:45.268 "r_mbytes_per_sec": 0, 00:09:45.268 "w_mbytes_per_sec": 0 00:09:45.268 }, 00:09:45.268 "claimed": true, 00:09:45.268 "claim_type": "exclusive_write", 00:09:45.268 "zoned": false, 00:09:45.268 "supported_io_types": { 00:09:45.268 "read": true, 00:09:45.268 "write": true, 00:09:45.268 "unmap": true, 00:09:45.268 "flush": 
true, 00:09:45.268 "reset": true, 00:09:45.268 "nvme_admin": false, 00:09:45.268 "nvme_io": false, 00:09:45.268 "nvme_io_md": false, 00:09:45.268 "write_zeroes": true, 00:09:45.268 "zcopy": true, 00:09:45.268 "get_zone_info": false, 00:09:45.268 "zone_management": false, 00:09:45.268 "zone_append": false, 00:09:45.269 "compare": false, 00:09:45.269 "compare_and_write": false, 00:09:45.269 "abort": true, 00:09:45.269 "seek_hole": false, 00:09:45.269 "seek_data": false, 00:09:45.269 "copy": true, 00:09:45.269 "nvme_iov_md": false 00:09:45.269 }, 00:09:45.269 "memory_domains": [ 00:09:45.269 { 00:09:45.269 "dma_device_id": "system", 00:09:45.269 "dma_device_type": 1 00:09:45.269 }, 00:09:45.269 { 00:09:45.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.269 "dma_device_type": 2 00:09:45.269 } 00:09:45.269 ], 00:09:45.269 "driver_specific": {} 00:09:45.269 }, 00:09:45.269 { 00:09:45.269 "name": "Passthru0", 00:09:45.269 "aliases": [ 00:09:45.269 "5cf68f60-81c7-50f0-ae9f-535f8a81bbce" 00:09:45.269 ], 00:09:45.269 "product_name": "passthru", 00:09:45.269 "block_size": 512, 00:09:45.269 "num_blocks": 16384, 00:09:45.269 "uuid": "5cf68f60-81c7-50f0-ae9f-535f8a81bbce", 00:09:45.269 "assigned_rate_limits": { 00:09:45.269 "rw_ios_per_sec": 0, 00:09:45.269 "rw_mbytes_per_sec": 0, 00:09:45.269 "r_mbytes_per_sec": 0, 00:09:45.269 "w_mbytes_per_sec": 0 00:09:45.269 }, 00:09:45.269 "claimed": false, 00:09:45.269 "zoned": false, 00:09:45.269 "supported_io_types": { 00:09:45.269 "read": true, 00:09:45.269 "write": true, 00:09:45.269 "unmap": true, 00:09:45.269 "flush": true, 00:09:45.269 "reset": true, 00:09:45.269 "nvme_admin": false, 00:09:45.269 "nvme_io": false, 00:09:45.269 "nvme_io_md": false, 00:09:45.269 "write_zeroes": true, 00:09:45.269 "zcopy": true, 00:09:45.269 "get_zone_info": false, 00:09:45.269 "zone_management": false, 00:09:45.269 "zone_append": false, 00:09:45.269 "compare": false, 00:09:45.269 "compare_and_write": false, 00:09:45.269 "abort": true, 00:09:45.269 "seek_hole": false, 00:09:45.269 "seek_data": false, 00:09:45.269 "copy": true, 00:09:45.269 "nvme_iov_md": false 00:09:45.269 }, 00:09:45.269 "memory_domains": [ 00:09:45.269 { 00:09:45.269 "dma_device_id": "system", 00:09:45.269 "dma_device_type": 1 00:09:45.269 }, 00:09:45.269 { 00:09:45.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.269 "dma_device_type": 2 00:09:45.269 } 00:09:45.269 ], 00:09:45.269 "driver_specific": { 00:09:45.269 "passthru": { 00:09:45.269 "name": "Passthru0", 00:09:45.269 "base_bdev_name": "Malloc0" 00:09:45.269 } 00:09:45.269 } 00:09:45.269 } 00:09:45.269 ]' 00:09:45.269 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:45.528 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:45.528 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.528 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.528 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.528 10:20:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.528 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:45.528 10:20:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:45.528 10:20:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:45.528 00:09:45.528 real 0m0.452s 00:09:45.528 user 0m0.344s 00:09:45.528 sys 0m0.046s 00:09:45.528 10:20:30 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.528 10:20:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:45.528 ************************************ 00:09:45.528 END TEST rpc_integrity 00:09:45.528 ************************************ 00:09:45.528 10:20:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:45.528 10:20:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.528 10:20:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.528 10:20:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.528 ************************************ 00:09:45.528 START TEST rpc_plugins 00:09:45.528 ************************************ 00:09:45.528 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:45.528 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:45.528 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.528 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:45.528 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.528 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:45.528 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:45.528 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.528 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:45.787 { 00:09:45.787 "name": "Malloc1", 00:09:45.787 "aliases": [ 00:09:45.787 "9ddf32de-d833-473b-b32f-092983d103e1" 00:09:45.787 ], 00:09:45.787 "product_name": "Malloc disk", 00:09:45.787 "block_size": 4096, 00:09:45.787 "num_blocks": 256, 00:09:45.787 "uuid": "9ddf32de-d833-473b-b32f-092983d103e1", 00:09:45.787 "assigned_rate_limits": { 00:09:45.787 "rw_ios_per_sec": 0, 00:09:45.787 "rw_mbytes_per_sec": 0, 00:09:45.787 "r_mbytes_per_sec": 0, 00:09:45.787 "w_mbytes_per_sec": 0 00:09:45.787 }, 00:09:45.787 "claimed": false, 00:09:45.787 "zoned": false, 00:09:45.787 "supported_io_types": { 00:09:45.787 "read": true, 00:09:45.787 "write": true, 00:09:45.787 "unmap": true, 00:09:45.787 "flush": true, 00:09:45.787 "reset": true, 00:09:45.787 "nvme_admin": false, 00:09:45.787 "nvme_io": false, 00:09:45.787 "nvme_io_md": false, 00:09:45.787 "write_zeroes": true, 00:09:45.787 "zcopy": true, 00:09:45.787 "get_zone_info": false, 00:09:45.787 "zone_management": false, 00:09:45.787 "zone_append": false, 00:09:45.787 "compare": false, 00:09:45.787 "compare_and_write": false, 00:09:45.787 "abort": true, 00:09:45.787 "seek_hole": false, 00:09:45.787 "seek_data": false, 00:09:45.787 "copy": true, 00:09:45.787 "nvme_iov_md": false 
00:09:45.787 }, 00:09:45.787 "memory_domains": [ 00:09:45.787 { 00:09:45.787 "dma_device_id": "system", 00:09:45.787 "dma_device_type": 1 00:09:45.787 }, 00:09:45.787 { 00:09:45.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.787 "dma_device_type": 2 00:09:45.787 } 00:09:45.787 ], 00:09:45.787 "driver_specific": {} 00:09:45.787 } 00:09:45.787 ]' 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:45.787 10:20:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:45.787 00:09:45.787 real 0m0.224s 00:09:45.787 user 0m0.164s 00:09:45.787 sys 0m0.020s 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.787 10:20:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:45.787 ************************************ 00:09:45.787 END TEST rpc_plugins 00:09:45.787 ************************************ 00:09:45.787 10:20:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:45.787 10:20:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.787 10:20:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.787 10:20:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.787 ************************************ 00:09:45.787 START TEST rpc_trace_cmd_test 00:09:45.787 ************************************ 00:09:45.787 10:20:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:45.787 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:45.787 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:45.787 10:20:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.787 10:20:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:46.046 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1961267", 00:09:46.046 "tpoint_group_mask": "0x8", 00:09:46.046 "iscsi_conn": { 00:09:46.046 "mask": "0x2", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "scsi": { 00:09:46.046 "mask": "0x4", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "bdev": { 00:09:46.046 "mask": "0x8", 00:09:46.046 "tpoint_mask": "0xffffffffffffffff" 00:09:46.046 }, 00:09:46.046 "nvmf_rdma": { 00:09:46.046 "mask": "0x10", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "nvmf_tcp": { 00:09:46.046 "mask": "0x20", 00:09:46.046 
"tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "ftl": { 00:09:46.046 "mask": "0x40", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "blobfs": { 00:09:46.046 "mask": "0x80", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "dsa": { 00:09:46.046 "mask": "0x200", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "thread": { 00:09:46.046 "mask": "0x400", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "nvme_pcie": { 00:09:46.046 "mask": "0x800", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "iaa": { 00:09:46.046 "mask": "0x1000", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "nvme_tcp": { 00:09:46.046 "mask": "0x2000", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "bdev_nvme": { 00:09:46.046 "mask": "0x4000", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "sock": { 00:09:46.046 "mask": "0x8000", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "blob": { 00:09:46.046 "mask": "0x10000", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "bdev_raid": { 00:09:46.046 "mask": "0x20000", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 }, 00:09:46.046 "scheduler": { 00:09:46.046 "mask": "0x40000", 00:09:46.046 "tpoint_mask": "0x0" 00:09:46.046 } 00:09:46.046 }' 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:46.046 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:46.306 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:46.306 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:46.306 10:20:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:46.306 00:09:46.306 real 0m0.349s 00:09:46.306 user 0m0.308s 00:09:46.306 sys 0m0.030s 00:09:46.306 10:20:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.306 10:20:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.306 ************************************ 00:09:46.306 END TEST rpc_trace_cmd_test 00:09:46.306 ************************************ 00:09:46.306 10:20:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:46.306 10:20:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:46.306 10:20:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:46.306 10:20:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.306 10:20:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.306 10:20:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.306 ************************************ 00:09:46.306 START TEST rpc_daemon_integrity 00:09:46.306 ************************************ 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.306 10:20:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.306 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:46.567 { 00:09:46.567 "name": "Malloc2", 00:09:46.567 "aliases": [ 00:09:46.567 "52c4fb53-951b-437f-ab24-89f434a13c41" 00:09:46.567 ], 00:09:46.567 "product_name": "Malloc disk", 00:09:46.567 "block_size": 512, 00:09:46.567 "num_blocks": 16384, 00:09:46.567 "uuid": "52c4fb53-951b-437f-ab24-89f434a13c41", 00:09:46.567 "assigned_rate_limits": { 00:09:46.567 "rw_ios_per_sec": 0, 00:09:46.567 "rw_mbytes_per_sec": 0, 00:09:46.567 "r_mbytes_per_sec": 0, 00:09:46.567 "w_mbytes_per_sec": 0 00:09:46.567 }, 00:09:46.567 "claimed": false, 00:09:46.567 "zoned": false, 00:09:46.567 "supported_io_types": { 00:09:46.567 "read": true, 00:09:46.567 "write": true, 00:09:46.567 "unmap": true, 00:09:46.567 "flush": true, 00:09:46.567 "reset": true, 00:09:46.567 "nvme_admin": false, 00:09:46.567 "nvme_io": false, 00:09:46.567 "nvme_io_md": false, 00:09:46.567 "write_zeroes": true, 00:09:46.567 "zcopy": true, 00:09:46.567 "get_zone_info": false, 00:09:46.567 "zone_management": false, 00:09:46.567 "zone_append": false, 00:09:46.567 "compare": false, 00:09:46.567 "compare_and_write": false, 00:09:46.567 "abort": true, 00:09:46.567 "seek_hole": false, 00:09:46.567 "seek_data": false, 00:09:46.567 "copy": true, 00:09:46.567 "nvme_iov_md": false 00:09:46.567 }, 00:09:46.567 "memory_domains": [ 00:09:46.567 { 00:09:46.567 "dma_device_id": "system", 00:09:46.567 "dma_device_type": 1 00:09:46.567 }, 00:09:46.567 { 00:09:46.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.567 "dma_device_type": 2 00:09:46.567 } 00:09:46.567 ], 00:09:46.567 "driver_specific": {} 00:09:46.567 } 00:09:46.567 ]' 00:09:46.567 10:20:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.567 [2024-12-09 10:20:31.074830] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:46.567 
[2024-12-09 10:20:31.074880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.567 [2024-12-09 10:20:31.074906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a3cb0 00:09:46.567 [2024-12-09 10:20:31.074939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.567 [2024-12-09 10:20:31.077700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.567 [2024-12-09 10:20:31.077785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:46.567 Passthru0 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:46.567 { 00:09:46.567 "name": "Malloc2", 00:09:46.567 "aliases": [ 00:09:46.567 "52c4fb53-951b-437f-ab24-89f434a13c41" 00:09:46.567 ], 00:09:46.567 "product_name": "Malloc disk", 00:09:46.567 "block_size": 512, 00:09:46.567 "num_blocks": 16384, 00:09:46.567 "uuid": "52c4fb53-951b-437f-ab24-89f434a13c41", 00:09:46.567 "assigned_rate_limits": { 00:09:46.567 "rw_ios_per_sec": 0, 00:09:46.567 "rw_mbytes_per_sec": 0, 00:09:46.567 "r_mbytes_per_sec": 0, 00:09:46.567 "w_mbytes_per_sec": 0 00:09:46.567 }, 00:09:46.567 "claimed": true, 00:09:46.567 "claim_type": "exclusive_write", 00:09:46.567 "zoned": false, 00:09:46.567 "supported_io_types": { 00:09:46.567 "read": true, 00:09:46.567 "write": true, 00:09:46.567 "unmap": true, 00:09:46.567 "flush": true, 00:09:46.567 "reset": true, 00:09:46.567 "nvme_admin": false, 00:09:46.567 "nvme_io": false, 00:09:46.567 "nvme_io_md": false, 00:09:46.567 "write_zeroes": true, 00:09:46.567 "zcopy": true, 00:09:46.567 "get_zone_info": false, 00:09:46.567 "zone_management": false, 00:09:46.567 "zone_append": false, 00:09:46.567 "compare": false, 00:09:46.567 "compare_and_write": false, 00:09:46.567 "abort": true, 00:09:46.567 "seek_hole": false, 00:09:46.567 "seek_data": false, 00:09:46.567 "copy": true, 00:09:46.567 "nvme_iov_md": false 00:09:46.567 }, 00:09:46.567 "memory_domains": [ 00:09:46.567 { 00:09:46.567 "dma_device_id": "system", 00:09:46.567 "dma_device_type": 1 00:09:46.567 }, 00:09:46.567 { 00:09:46.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.567 "dma_device_type": 2 00:09:46.567 } 00:09:46.567 ], 00:09:46.567 "driver_specific": {} 00:09:46.567 }, 00:09:46.567 { 00:09:46.567 "name": "Passthru0", 00:09:46.567 "aliases": [ 00:09:46.567 "88c19a2a-1a8a-5d89-8d05-bf7697ba6d59" 00:09:46.567 ], 00:09:46.567 "product_name": "passthru", 00:09:46.567 "block_size": 512, 00:09:46.567 "num_blocks": 16384, 00:09:46.567 "uuid": "88c19a2a-1a8a-5d89-8d05-bf7697ba6d59", 00:09:46.567 "assigned_rate_limits": { 00:09:46.567 "rw_ios_per_sec": 0, 00:09:46.567 "rw_mbytes_per_sec": 0, 00:09:46.567 "r_mbytes_per_sec": 0, 00:09:46.567 "w_mbytes_per_sec": 0 00:09:46.567 }, 00:09:46.567 "claimed": false, 00:09:46.567 "zoned": false, 00:09:46.567 "supported_io_types": { 00:09:46.567 "read": true, 00:09:46.567 "write": true, 00:09:46.567 "unmap": true, 00:09:46.567 "flush": true, 00:09:46.567 "reset": true, 
00:09:46.567 "nvme_admin": false, 00:09:46.567 "nvme_io": false, 00:09:46.567 "nvme_io_md": false, 00:09:46.567 "write_zeroes": true, 00:09:46.567 "zcopy": true, 00:09:46.567 "get_zone_info": false, 00:09:46.567 "zone_management": false, 00:09:46.567 "zone_append": false, 00:09:46.567 "compare": false, 00:09:46.567 "compare_and_write": false, 00:09:46.567 "abort": true, 00:09:46.567 "seek_hole": false, 00:09:46.567 "seek_data": false, 00:09:46.567 "copy": true, 00:09:46.567 "nvme_iov_md": false 00:09:46.567 }, 00:09:46.567 "memory_domains": [ 00:09:46.567 { 00:09:46.567 "dma_device_id": "system", 00:09:46.567 "dma_device_type": 1 00:09:46.567 }, 00:09:46.567 { 00:09:46.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.567 "dma_device_type": 2 00:09:46.567 } 00:09:46.567 ], 00:09:46.567 "driver_specific": { 00:09:46.567 "passthru": { 00:09:46.567 "name": "Passthru0", 00:09:46.567 "base_bdev_name": "Malloc2" 00:09:46.567 } 00:09:46.567 } 00:09:46.567 } 00:09:46.567 ]' 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.567 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:46.826 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:46.826 10:20:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:46.826 00:09:46.826 real 0m0.451s 00:09:46.826 user 0m0.338s 00:09:46.826 sys 0m0.054s 00:09:46.826 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.826 10:20:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:46.826 ************************************ 00:09:46.826 END TEST rpc_daemon_integrity 00:09:46.826 ************************************ 00:09:46.826 10:20:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:46.826 10:20:31 rpc -- rpc/rpc.sh@84 -- # killprocess 1961267 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@954 -- # '[' -z 1961267 ']' 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@958 -- # kill -0 1961267 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@959 -- # uname 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1961267 
00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1961267' 00:09:46.826 killing process with pid 1961267 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@973 -- # kill 1961267 00:09:46.826 10:20:31 rpc -- common/autotest_common.sh@978 -- # wait 1961267 00:09:47.393 00:09:47.393 real 0m3.384s 00:09:47.393 user 0m4.510s 00:09:47.393 sys 0m1.040s 00:09:47.393 10:20:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.393 10:20:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.393 ************************************ 00:09:47.393 END TEST rpc 00:09:47.393 ************************************ 00:09:47.651 10:20:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:47.651 10:20:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.651 10:20:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.651 10:20:32 -- common/autotest_common.sh@10 -- # set +x 00:09:47.651 ************************************ 00:09:47.651 START TEST skip_rpc 00:09:47.651 ************************************ 00:09:47.651 10:20:32 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:47.651 * Looking for test storage... 00:09:47.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:47.651 10:20:32 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.651 10:20:32 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.651 10:20:32 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.909 10:20:32 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.909 10:20:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:47.909 10:20:32 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.909 10:20:32 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.909 --rc genhtml_branch_coverage=1 00:09:47.909 --rc genhtml_function_coverage=1 00:09:47.909 --rc genhtml_legend=1 00:09:47.909 --rc geninfo_all_blocks=1 00:09:47.909 --rc geninfo_unexecuted_blocks=1 00:09:47.910 00:09:47.910 ' 00:09:47.910 10:20:32 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.910 --rc genhtml_branch_coverage=1 00:09:47.910 --rc genhtml_function_coverage=1 00:09:47.910 --rc genhtml_legend=1 00:09:47.910 --rc geninfo_all_blocks=1 00:09:47.910 --rc geninfo_unexecuted_blocks=1 00:09:47.910 00:09:47.910 ' 00:09:47.910 10:20:32 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.910 --rc genhtml_branch_coverage=1 00:09:47.910 --rc genhtml_function_coverage=1 00:09:47.910 --rc genhtml_legend=1 00:09:47.910 --rc geninfo_all_blocks=1 00:09:47.910 --rc geninfo_unexecuted_blocks=1 00:09:47.910 00:09:47.910 ' 00:09:47.910 10:20:32 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.910 --rc genhtml_branch_coverage=1 00:09:47.910 --rc genhtml_function_coverage=1 00:09:47.910 --rc genhtml_legend=1 00:09:47.910 --rc geninfo_all_blocks=1 00:09:47.910 --rc geninfo_unexecuted_blocks=1 00:09:47.910 00:09:47.910 ' 00:09:47.910 10:20:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:47.910 10:20:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:47.910 10:20:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:47.910 10:20:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.910 10:20:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.910 10:20:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.910 ************************************ 00:09:47.910 START TEST skip_rpc 00:09:47.910 ************************************ 00:09:47.910 10:20:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:47.910 
10:20:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1961847 00:09:47.910 10:20:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:47.910 10:20:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.910 10:20:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:47.910 [2024-12-09 10:20:32.564611] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:09:48.169 [2024-12-09 10:20:32.564697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961847 ] 00:09:48.169 [2024-12-09 10:20:32.737748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.426 [2024-12-09 10:20:32.860914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1961847 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1961847 ']' 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1961847 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1961847 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1961847' 00:09:53.691 killing process with pid 1961847 00:09:53.691 10:20:37 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1961847 00:09:53.691 10:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1961847 00:09:53.691 00:09:53.691 real 0m5.728s 00:09:53.691 user 0m5.161s 00:09:53.691 sys 0m0.586s 00:09:53.691 10:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.691 10:20:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.691 ************************************ 00:09:53.691 END TEST skip_rpc 00:09:53.691 ************************************ 00:09:53.692 10:20:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:53.692 10:20:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.692 10:20:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.692 10:20:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.692 ************************************ 00:09:53.692 START TEST skip_rpc_with_json 00:09:53.692 ************************************ 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1962538 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1962538 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1962538 ']' 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.692 10:20:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:53.692 [2024-12-09 10:20:38.313468] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:09:53.692 [2024-12-09 10:20:38.313579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962538 ] 00:09:53.949 [2024-12-09 10:20:38.452329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.949 [2024-12-09 10:20:38.567198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.514 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.514 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:54.514 10:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:54.514 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.514 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:54.514 [2024-12-09 10:20:39.077887] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:54.514 request: 00:09:54.514 { 00:09:54.514 "trtype": "tcp", 00:09:54.514 "method": "nvmf_get_transports", 00:09:54.514 "req_id": 1 00:09:54.514 } 00:09:54.515 Got JSON-RPC error response 00:09:54.515 response: 00:09:54.515 { 00:09:54.515 "code": -19, 00:09:54.515 "message": "No such device" 00:09:54.515 } 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:54.515 [2024-12-09 10:20:39.090167] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.515 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:54.786 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.787 10:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:54.787 { 00:09:54.787 "subsystems": [ 00:09:54.787 { 00:09:54.787 "subsystem": "fsdev", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "fsdev_set_opts", 00:09:54.787 "params": { 00:09:54.787 "fsdev_io_pool_size": 65535, 00:09:54.787 "fsdev_io_cache_size": 256 00:09:54.787 } 00:09:54.787 } 00:09:54.787 ] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "vfio_user_target", 00:09:54.787 "config": null 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "keyring", 00:09:54.787 "config": [] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "iobuf", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "iobuf_set_options", 00:09:54.787 "params": { 00:09:54.787 "small_pool_count": 8192, 00:09:54.787 "large_pool_count": 1024, 00:09:54.787 "small_bufsize": 8192, 00:09:54.787 "large_bufsize": 135168, 00:09:54.787 "enable_numa": false 00:09:54.787 } 00:09:54.787 } 
00:09:54.787 ] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "sock", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "sock_set_default_impl", 00:09:54.787 "params": { 00:09:54.787 "impl_name": "posix" 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "sock_impl_set_options", 00:09:54.787 "params": { 00:09:54.787 "impl_name": "ssl", 00:09:54.787 "recv_buf_size": 4096, 00:09:54.787 "send_buf_size": 4096, 00:09:54.787 "enable_recv_pipe": true, 00:09:54.787 "enable_quickack": false, 00:09:54.787 "enable_placement_id": 0, 00:09:54.787 "enable_zerocopy_send_server": true, 00:09:54.787 "enable_zerocopy_send_client": false, 00:09:54.787 "zerocopy_threshold": 0, 00:09:54.787 "tls_version": 0, 00:09:54.787 "enable_ktls": false 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "sock_impl_set_options", 00:09:54.787 "params": { 00:09:54.787 "impl_name": "posix", 00:09:54.787 "recv_buf_size": 2097152, 00:09:54.787 "send_buf_size": 2097152, 00:09:54.787 "enable_recv_pipe": true, 00:09:54.787 "enable_quickack": false, 00:09:54.787 "enable_placement_id": 0, 00:09:54.787 "enable_zerocopy_send_server": true, 00:09:54.787 "enable_zerocopy_send_client": false, 00:09:54.787 "zerocopy_threshold": 0, 00:09:54.787 "tls_version": 0, 00:09:54.787 "enable_ktls": false 00:09:54.787 } 00:09:54.787 } 00:09:54.787 ] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "vmd", 00:09:54.787 "config": [] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "accel", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "accel_set_options", 00:09:54.787 "params": { 00:09:54.787 "small_cache_size": 128, 00:09:54.787 "large_cache_size": 16, 00:09:54.787 "task_count": 2048, 00:09:54.787 "sequence_count": 2048, 00:09:54.787 "buf_count": 2048 00:09:54.787 } 00:09:54.787 } 00:09:54.787 ] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "bdev", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "bdev_set_options", 00:09:54.787 "params": { 00:09:54.787 "bdev_io_pool_size": 65535, 00:09:54.787 "bdev_io_cache_size": 256, 00:09:54.787 "bdev_auto_examine": true, 00:09:54.787 "iobuf_small_cache_size": 128, 00:09:54.787 "iobuf_large_cache_size": 16 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "bdev_raid_set_options", 00:09:54.787 "params": { 00:09:54.787 "process_window_size_kb": 1024, 00:09:54.787 "process_max_bandwidth_mb_sec": 0 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "bdev_iscsi_set_options", 00:09:54.787 "params": { 00:09:54.787 "timeout_sec": 30 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "bdev_nvme_set_options", 00:09:54.787 "params": { 00:09:54.787 "action_on_timeout": "none", 00:09:54.787 "timeout_us": 0, 00:09:54.787 "timeout_admin_us": 0, 00:09:54.787 "keep_alive_timeout_ms": 10000, 00:09:54.787 "arbitration_burst": 0, 00:09:54.787 "low_priority_weight": 0, 00:09:54.787 "medium_priority_weight": 0, 00:09:54.787 "high_priority_weight": 0, 00:09:54.787 "nvme_adminq_poll_period_us": 10000, 00:09:54.787 "nvme_ioq_poll_period_us": 0, 00:09:54.787 "io_queue_requests": 0, 00:09:54.787 "delay_cmd_submit": true, 00:09:54.787 "transport_retry_count": 4, 00:09:54.787 "bdev_retry_count": 3, 00:09:54.787 "transport_ack_timeout": 0, 00:09:54.787 "ctrlr_loss_timeout_sec": 0, 00:09:54.787 "reconnect_delay_sec": 0, 00:09:54.787 "fast_io_fail_timeout_sec": 0, 00:09:54.787 "disable_auto_failback": false, 00:09:54.787 "generate_uuids": false, 00:09:54.787 "transport_tos": 
0, 00:09:54.787 "nvme_error_stat": false, 00:09:54.787 "rdma_srq_size": 0, 00:09:54.787 "io_path_stat": false, 00:09:54.787 "allow_accel_sequence": false, 00:09:54.787 "rdma_max_cq_size": 0, 00:09:54.787 "rdma_cm_event_timeout_ms": 0, 00:09:54.787 "dhchap_digests": [ 00:09:54.787 "sha256", 00:09:54.787 "sha384", 00:09:54.787 "sha512" 00:09:54.787 ], 00:09:54.787 "dhchap_dhgroups": [ 00:09:54.787 "null", 00:09:54.787 "ffdhe2048", 00:09:54.787 "ffdhe3072", 00:09:54.787 "ffdhe4096", 00:09:54.787 "ffdhe6144", 00:09:54.787 "ffdhe8192" 00:09:54.787 ] 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "bdev_nvme_set_hotplug", 00:09:54.787 "params": { 00:09:54.787 "period_us": 100000, 00:09:54.787 "enable": false 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "bdev_wait_for_examine" 00:09:54.787 } 00:09:54.787 ] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "scsi", 00:09:54.787 "config": null 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "scheduler", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "framework_set_scheduler", 00:09:54.787 "params": { 00:09:54.787 "name": "static" 00:09:54.787 } 00:09:54.787 } 00:09:54.787 ] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "vhost_scsi", 00:09:54.787 "config": [] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "vhost_blk", 00:09:54.787 "config": [] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "ublk", 00:09:54.787 "config": [] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "nbd", 00:09:54.787 "config": [] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "nvmf", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "nvmf_set_config", 00:09:54.787 "params": { 00:09:54.787 "discovery_filter": "match_any", 00:09:54.787 "admin_cmd_passthru": { 00:09:54.787 "identify_ctrlr": false 00:09:54.787 }, 00:09:54.787 "dhchap_digests": [ 00:09:54.787 "sha256", 00:09:54.787 "sha384", 00:09:54.787 "sha512" 00:09:54.787 ], 00:09:54.787 "dhchap_dhgroups": [ 00:09:54.787 "null", 00:09:54.787 "ffdhe2048", 00:09:54.787 "ffdhe3072", 00:09:54.787 "ffdhe4096", 00:09:54.787 "ffdhe6144", 00:09:54.787 "ffdhe8192" 00:09:54.787 ] 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "nvmf_set_max_subsystems", 00:09:54.787 "params": { 00:09:54.787 "max_subsystems": 1024 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "nvmf_set_crdt", 00:09:54.787 "params": { 00:09:54.787 "crdt1": 0, 00:09:54.787 "crdt2": 0, 00:09:54.787 "crdt3": 0 00:09:54.787 } 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "method": "nvmf_create_transport", 00:09:54.787 "params": { 00:09:54.787 "trtype": "TCP", 00:09:54.787 "max_queue_depth": 128, 00:09:54.787 "max_io_qpairs_per_ctrlr": 127, 00:09:54.787 "in_capsule_data_size": 4096, 00:09:54.787 "max_io_size": 131072, 00:09:54.787 "io_unit_size": 131072, 00:09:54.787 "max_aq_depth": 128, 00:09:54.787 "num_shared_buffers": 511, 00:09:54.787 "buf_cache_size": 4294967295, 00:09:54.787 "dif_insert_or_strip": false, 00:09:54.787 "zcopy": false, 00:09:54.787 "c2h_success": true, 00:09:54.787 "sock_priority": 0, 00:09:54.787 "abort_timeout_sec": 1, 00:09:54.787 "ack_timeout": 0, 00:09:54.787 "data_wr_pool_size": 0 00:09:54.787 } 00:09:54.787 } 00:09:54.787 ] 00:09:54.787 }, 00:09:54.787 { 00:09:54.787 "subsystem": "iscsi", 00:09:54.787 "config": [ 00:09:54.787 { 00:09:54.787 "method": "iscsi_set_options", 00:09:54.787 "params": { 00:09:54.787 "node_base": "iqn.2016-06.io.spdk", 00:09:54.787 "max_sessions": 
128, 00:09:54.787 "max_connections_per_session": 2, 00:09:54.787 "max_queue_depth": 64, 00:09:54.787 "default_time2wait": 2, 00:09:54.788 "default_time2retain": 20, 00:09:54.788 "first_burst_length": 8192, 00:09:54.788 "immediate_data": true, 00:09:54.788 "allow_duplicated_isid": false, 00:09:54.788 "error_recovery_level": 0, 00:09:54.788 "nop_timeout": 60, 00:09:54.788 "nop_in_interval": 30, 00:09:54.788 "disable_chap": false, 00:09:54.788 "require_chap": false, 00:09:54.788 "mutual_chap": false, 00:09:54.788 "chap_group": 0, 00:09:54.788 "max_large_datain_per_connection": 64, 00:09:54.788 "max_r2t_per_connection": 4, 00:09:54.788 "pdu_pool_size": 36864, 00:09:54.788 "immediate_data_pool_size": 16384, 00:09:54.788 "data_out_pool_size": 2048 00:09:54.788 } 00:09:54.788 } 00:09:54.788 ] 00:09:54.788 } 00:09:54.788 ] 00:09:54.788 } 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1962538 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1962538 ']' 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1962538 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962538 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962538' 00:09:54.788 killing process with pid 1962538 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1962538 00:09:54.788 10:20:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1962538 00:09:55.722 10:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1962792 00:09:55.722 10:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:55.722 10:20:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1962792 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1962792 ']' 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1962792 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962792 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1962792' 00:10:00.994 killing process with pid 1962792 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1962792 00:10:00.994 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1962792 00:10:01.253 10:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:01.253 10:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:10:01.253 00:10:01.253 real 0m7.636s 00:10:01.253 user 0m6.926s 00:10:01.253 sys 0m1.240s 00:10:01.253 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.253 10:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:01.253 ************************************ 00:10:01.253 END TEST skip_rpc_with_json 00:10:01.253 ************************************ 00:10:01.253 10:20:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:01.512 10:20:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.512 10:20:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.512 10:20:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.512 ************************************ 00:10:01.512 START TEST skip_rpc_with_delay 00:10:01.512 ************************************ 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:01.512 10:20:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:01.512 
[2024-12-09 10:20:46.089225] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:10:01.512 10:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:01.512 10:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:01.512 10:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:01.512 10:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:01.512 00:10:01.512 real 0m0.178s 00:10:01.512 user 0m0.117s 00:10:01.512 sys 0m0.059s 00:10:01.512 10:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.512 10:20:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:01.512 ************************************ 00:10:01.512 END TEST skip_rpc_with_delay 00:10:01.512 ************************************ 00:10:01.771 10:20:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:01.771 10:20:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:01.771 10:20:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:01.771 10:20:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.771 10:20:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.771 10:20:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.771 ************************************ 00:10:01.771 START TEST exit_on_failed_rpc_init 00:10:01.771 ************************************ 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1963529 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1963529 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1963529 ']' 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.771 10:20:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:01.771 [2024-12-09 10:20:46.289112] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:10:01.771 [2024-12-09 10:20:46.289215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963529 ] 00:10:01.771 [2024-12-09 10:20:46.420424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.030 [2024-12-09 10:20:46.540431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:10:02.600 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:10:02.600 [2024-12-09 10:20:47.114432] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:10:02.600 [2024-12-09 10:20:47.114541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963549 ] 00:10:02.600 [2024-12-09 10:20:47.241261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.859 [2024-12-09 10:20:47.357046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.859 [2024-12-09 10:20:47.357261] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:02.859 [2024-12-09 10:20:47.357312] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:02.859 [2024-12-09 10:20:47.357341] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1963529 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1963529 ']' 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1963529 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.859 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963529 00:10:03.117 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.117 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.117 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1963529' 00:10:03.117 killing process with pid 1963529 00:10:03.117 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1963529 00:10:03.117 10:20:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1963529 00:10:03.683 00:10:03.683 real 0m2.023s 00:10:03.683 user 0m2.135s 00:10:03.683 sys 0m0.733s 00:10:03.683 10:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.683 10:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:03.683 ************************************ 00:10:03.683 END TEST exit_on_failed_rpc_init 00:10:03.683 ************************************ 00:10:03.683 10:20:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:10:03.683 00:10:03.683 real 0m16.188s 00:10:03.683 user 0m14.651s 00:10:03.683 sys 0m2.958s 00:10:03.683 10:20:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.683 10:20:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.683 ************************************ 00:10:03.683 END TEST skip_rpc 00:10:03.683 ************************************ 00:10:03.683 10:20:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:03.683 10:20:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.683 10:20:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.683 10:20:48 -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.943 ************************************ 00:10:03.944 START TEST rpc_client 00:10:03.944 ************************************ 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:10:03.944 * Looking for test storage... 00:10:03.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.944 10:20:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.944 --rc genhtml_branch_coverage=1 00:10:03.944 --rc genhtml_function_coverage=1 00:10:03.944 --rc genhtml_legend=1 00:10:03.944 --rc geninfo_all_blocks=1 00:10:03.944 --rc geninfo_unexecuted_blocks=1 00:10:03.944 00:10:03.944 ' 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.944 --rc genhtml_branch_coverage=1 00:10:03.944 --rc genhtml_function_coverage=1 00:10:03.944 --rc genhtml_legend=1 00:10:03.944 --rc geninfo_all_blocks=1 00:10:03.944 --rc geninfo_unexecuted_blocks=1 00:10:03.944 00:10:03.944 ' 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.944 --rc genhtml_branch_coverage=1 00:10:03.944 --rc genhtml_function_coverage=1 00:10:03.944 --rc genhtml_legend=1 00:10:03.944 --rc geninfo_all_blocks=1 00:10:03.944 --rc geninfo_unexecuted_blocks=1 00:10:03.944 00:10:03.944 ' 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.944 --rc genhtml_branch_coverage=1 00:10:03.944 --rc genhtml_function_coverage=1 00:10:03.944 --rc genhtml_legend=1 00:10:03.944 --rc geninfo_all_blocks=1 00:10:03.944 --rc geninfo_unexecuted_blocks=1 00:10:03.944 00:10:03.944 ' 00:10:03.944 10:20:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:10:03.944 OK 00:10:03.944 10:20:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:03.944 00:10:03.944 real 0m0.227s 00:10:03.944 user 0m0.141s 00:10:03.944 sys 0m0.097s 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.944 10:20:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:03.944 ************************************ 00:10:03.944 END TEST rpc_client 00:10:03.944 ************************************ 00:10:04.203 10:20:48 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:10:04.203 10:20:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.203 10:20:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.203 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:10:04.203 ************************************ 00:10:04.203 START TEST json_config 00:10:04.203 ************************************ 00:10:04.203 10:20:48 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:10:04.203 10:20:48 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.203 10:20:48 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.203 10:20:48 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.462 10:20:48 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.462 10:20:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.462 10:20:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.462 10:20:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.462 10:20:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.462 10:20:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.462 10:20:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.462 10:20:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.462 10:20:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.462 10:20:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.462 10:20:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.462 10:20:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.462 10:20:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:04.462 10:20:48 json_config -- scripts/common.sh@345 -- # : 1 00:10:04.462 10:20:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.462 10:20:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.462 10:20:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:04.463 10:20:48 json_config -- scripts/common.sh@353 -- # local d=1 00:10:04.463 10:20:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.463 10:20:48 json_config -- scripts/common.sh@355 -- # echo 1 00:10:04.463 10:20:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.463 10:20:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:04.463 10:20:48 json_config -- scripts/common.sh@353 -- # local d=2 00:10:04.463 10:20:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.463 10:20:48 json_config -- scripts/common.sh@355 -- # echo 2 00:10:04.463 10:20:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.463 10:20:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.463 10:20:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.463 10:20:48 json_config -- scripts/common.sh@368 -- # return 0 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.463 --rc genhtml_branch_coverage=1 00:10:04.463 --rc genhtml_function_coverage=1 00:10:04.463 --rc genhtml_legend=1 00:10:04.463 --rc geninfo_all_blocks=1 00:10:04.463 --rc geninfo_unexecuted_blocks=1 00:10:04.463 00:10:04.463 ' 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.463 --rc genhtml_branch_coverage=1 00:10:04.463 --rc genhtml_function_coverage=1 00:10:04.463 --rc genhtml_legend=1 00:10:04.463 --rc geninfo_all_blocks=1 00:10:04.463 --rc geninfo_unexecuted_blocks=1 00:10:04.463 00:10:04.463 ' 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.463 --rc genhtml_branch_coverage=1 00:10:04.463 --rc genhtml_function_coverage=1 00:10:04.463 --rc genhtml_legend=1 00:10:04.463 --rc geninfo_all_blocks=1 00:10:04.463 --rc geninfo_unexecuted_blocks=1 00:10:04.463 00:10:04.463 ' 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.463 --rc genhtml_branch_coverage=1 00:10:04.463 --rc genhtml_function_coverage=1 00:10:04.463 --rc genhtml_legend=1 00:10:04.463 --rc geninfo_all_blocks=1 00:10:04.463 --rc geninfo_unexecuted_blocks=1 00:10:04.463 00:10:04.463 ' 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:10:04.463 10:20:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.463 10:20:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.463 10:20:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.463 10:20:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.463 10:20:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.463 10:20:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.463 10:20:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.463 10:20:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.463 10:20:48 json_config -- paths/export.sh@5 -- # export PATH 00:10:04.463 10:20:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@51 -- # : 0 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:10:04.463 10:20:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.463 10:20:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:10:04.463 INFO: JSON configuration test init 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.463 10:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.463 10:20:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:10:04.463 10:20:48 json_config -- 
json_config/common.sh@9 -- # local app=target 00:10:04.463 10:20:48 json_config -- json_config/common.sh@10 -- # shift 00:10:04.463 10:20:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:04.463 10:20:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:04.463 10:20:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:04.463 10:20:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.463 10:20:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.463 10:20:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1963925 00:10:04.463 10:20:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:04.464 10:20:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:04.464 Waiting for target to run... 00:10:04.464 10:20:48 json_config -- json_config/common.sh@25 -- # waitforlisten 1963925 /var/tmp/spdk_tgt.sock 00:10:04.464 10:20:48 json_config -- common/autotest_common.sh@835 -- # '[' -z 1963925 ']' 00:10:04.464 10:20:48 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:04.464 10:20:48 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.464 10:20:48 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:04.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:04.464 10:20:48 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.464 10:20:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 [2024-12-09 10:20:49.067544] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:10:04.464 [2024-12-09 10:20:49.067743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963925 ] 00:10:05.399 [2024-12-09 10:20:49.877093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.399 [2024-12-09 10:20:49.981549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.972 10:20:50 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.972 10:20:50 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:05.972 10:20:50 json_config -- json_config/common.sh@26 -- # echo '' 00:10:05.972 00:10:05.972 10:20:50 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:10:05.972 10:20:50 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:10:05.972 10:20:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.972 10:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:05.972 10:20:50 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:10:05.972 10:20:50 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:10:05.972 10:20:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.972 10:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:05.972 10:20:50 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:05.972 10:20:50 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:10:05.972 10:20:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:10.235 10:20:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.235 10:20:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:10:10.235 10:20:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:10:10.235 10:20:54 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@54 -- # sort 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:10:10.235 10:20:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.235 10:20:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@62 -- # return 0 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:10:10.235 10:20:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.235 10:20:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:10:10.235 10:20:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:10.235 10:20:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:10.494 MallocForNvmf0 00:10:10.494 10:20:55 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:10.494 10:20:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:10.752 MallocForNvmf1 00:10:11.010 10:20:55 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:10:11.010 10:20:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:10:11.269 [2024-12-09 10:20:55.806116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.269 10:20:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.269 10:20:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:12.205 10:20:56 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:12.205 10:20:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:12.463 10:20:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:12.464 10:20:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:12.721 10:20:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:12.721 10:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:12.980 [2024-12-09 10:20:57.536993] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:12.980 10:20:57 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:10:12.980 10:20:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.980 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:12.980 10:20:57 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:10:12.980 10:20:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.980 10:20:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:12.980 10:20:57 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:10:12.980 10:20:57 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:12.980 10:20:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:13.915 MallocBdevForConfigChangeCheck 00:10:13.915 10:20:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:10:13.915 10:20:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.915 10:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:13.915 10:20:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:10:13.915 10:20:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:14.172 10:20:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:10:14.172 INFO: shutting down applications... 
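For reference, the target state exercised here can be rebuilt by hand with the same RPCs the json_config test just replayed. A minimal sketch, assuming a spdk_tgt instance already listening on /var/tmp/spdk_tgt.sock; every command below appears verbatim in the trace above:

#!/usr/bin/env bash
# Rebuild the NVMe-oF TCP target state driven by json_config.sh above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Two malloc bdevs to back the namespaces: 8 MiB / 512 B blocks, 4 MiB / 1024 B blocks.
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport (-u IO unit size, -c in-capsule data size), then the subsystem.
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# Attach both namespaces and expose the subsystem on the loopback listener.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420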
00:10:14.172 10:20:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:10:14.172 10:20:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:10:14.172 10:20:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:10:14.172 10:20:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:16.071 Calling clear_iscsi_subsystem 00:10:16.071 Calling clear_nvmf_subsystem 00:10:16.071 Calling clear_nbd_subsystem 00:10:16.071 Calling clear_ublk_subsystem 00:10:16.071 Calling clear_vhost_blk_subsystem 00:10:16.071 Calling clear_vhost_scsi_subsystem 00:10:16.071 Calling clear_bdev_subsystem 00:10:16.071 10:21:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:10:16.071 10:21:00 json_config -- json_config/json_config.sh@350 -- # count=100 00:10:16.071 10:21:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:10:16.071 10:21:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:16.071 10:21:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:16.071 10:21:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:10:16.639 10:21:01 json_config -- json_config/json_config.sh@352 -- # break 00:10:16.639 10:21:01 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:10:16.639 10:21:01 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:10:16.639 10:21:01 json_config -- json_config/common.sh@31 -- # local app=target 00:10:16.639 10:21:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:16.639 10:21:01 json_config -- json_config/common.sh@35 -- # [[ -n 1963925 ]] 00:10:16.639 10:21:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1963925 00:10:16.639 10:21:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:16.639 10:21:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:16.639 10:21:01 json_config -- json_config/common.sh@41 -- # kill -0 1963925 00:10:16.639 10:21:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:16.899 10:21:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:16.899 10:21:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:16.899 10:21:01 json_config -- json_config/common.sh@41 -- # kill -0 1963925 00:10:16.899 10:21:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:16.899 10:21:01 json_config -- json_config/common.sh@43 -- # break 00:10:16.899 10:21:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:16.899 10:21:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:16.899 SPDK target shutdown done 00:10:16.899 10:21:01 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:10:16.899 INFO: relaunching applications... 
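Note: the shutdown handshake json_config/common.sh runs above is SIGINT first, then polling with signal 0. The same loop as a standalone sketch (PID taken from the trace; 30 iterations of 0.5 s matches the logged bounds):

    pid=1963925                               # target PID, as logged above
    kill -SIGINT "$pid"                       # ask spdk_tgt to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # signal 0 sends nothing; it only tests existence
        sleep 0.5
    done

kill -0 fails as soon as the process is gone, so the loop ends on clean exit or after a roughly 15-second grace period.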
00:10:16.899 10:21:01 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:16.899 10:21:01 json_config -- json_config/common.sh@9 -- # local app=target 00:10:16.899 10:21:01 json_config -- json_config/common.sh@10 -- # shift 00:10:16.899 10:21:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:16.899 10:21:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:16.899 10:21:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:16.899 10:21:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:16.899 10:21:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:16.899 10:21:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1965604 00:10:16.899 10:21:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:16.899 10:21:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:16.899 Waiting for target to run... 00:10:16.899 10:21:01 json_config -- json_config/common.sh@25 -- # waitforlisten 1965604 /var/tmp/spdk_tgt.sock 00:10:16.899 10:21:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 1965604 ']' 00:10:16.899 10:21:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:16.899 10:21:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.899 10:21:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:16.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:16.899 10:21:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.899 10:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:17.158 [2024-12-09 10:21:01.617252] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:10:17.158 [2024-12-09 10:21:01.617369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965604 ] 00:10:17.723 [2024-12-09 10:21:02.324454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.981 [2024-12-09 10:21:02.429665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.266 [2024-12-09 10:21:05.593665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.266 [2024-12-09 10:21:05.626501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:21.266 10:21:05 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.266 10:21:05 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:21.266 10:21:05 json_config -- json_config/common.sh@26 -- # echo '' 00:10:21.266 00:10:21.266 10:21:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:10:21.266 10:21:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:21.266 INFO: Checking if target configuration is the same... 
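Note: the relaunch replays the configuration captured earlier by save_config; spdk_tgt consumes it via --json at startup, so none of the bdev/nvmf RPCs need to be reissued by hand. The launch line from the trace, restated with the flags annotated (all values verbatim from the log):

    # -m 0x1: reactors on core 0 only; -s 1024: 1024 MB of hugepage memory
    # -r: RPC listen socket; --json: config file written before the shutdown
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json

The subsequent DPDK EAL parameter dump and the re-appearing "Target Listening on 127.0.0.1 port 4420" notice confirm the saved state was restored.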
00:10:21.267 10:21:05 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:21.267 10:21:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:10:21.267 10:21:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:21.267 + '[' 2 -ne 2 ']' 00:10:21.267 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:21.267 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:10:21.267 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:21.267 +++ basename /dev/fd/62 00:10:21.267 ++ mktemp /tmp/62.XXX 00:10:21.267 + tmp_file_1=/tmp/62.8iY 00:10:21.267 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:21.267 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:21.267 + tmp_file_2=/tmp/spdk_tgt_config.json.6eY 00:10:21.267 + ret=0 00:10:21.267 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:21.834 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:21.834 + diff -u /tmp/62.8iY /tmp/spdk_tgt_config.json.6eY 00:10:21.834 + echo 'INFO: JSON config files are the same' 00:10:21.834 INFO: JSON config files are the same 00:10:21.834 + rm /tmp/62.8iY /tmp/spdk_tgt_config.json.6eY 00:10:21.834 + exit 0 00:10:21.834 10:21:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:10:21.834 10:21:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:21.834 INFO: changing configuration and checking if this can be detected... 00:10:21.834 10:21:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:21.834 10:21:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:22.767 10:21:07 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:22.767 10:21:07 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:10:22.767 10:21:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:22.767 + '[' 2 -ne 2 ']' 00:10:22.767 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:22.767 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
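Note: json_diff.sh normalizes both sides before diffing, since the key ordering of a freshly dumped config may not match the saved file byte for byte. The core of the comparison, condensed (assuming config_filter.py -method sort reads a config on stdin, which is how json_diff.sh drives it here; temp paths are illustrative, the real script uses mktemp names like /tmp/62.8iY above):

    live=$(mktemp) saved=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
    diff -u "$live" "$saved"                  # exit 0 => configurations match

Deleting MallocBdevForConfigChangeCheck and rerunning the same diff is what produces the intentional ret=1 mismatch a few lines below.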
00:10:22.767 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:22.767 +++ basename /dev/fd/62 00:10:22.767 ++ mktemp /tmp/62.XXX 00:10:22.767 + tmp_file_1=/tmp/62.rff 00:10:22.767 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:22.767 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:22.767 + tmp_file_2=/tmp/spdk_tgt_config.json.0Dj 00:10:22.767 + ret=0 00:10:22.767 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:23.697 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:23.697 + diff -u /tmp/62.rff /tmp/spdk_tgt_config.json.0Dj 00:10:23.697 + ret=1 00:10:23.697 + echo '=== Start of file: /tmp/62.rff ===' 00:10:23.697 + cat /tmp/62.rff 00:10:23.697 + echo '=== End of file: /tmp/62.rff ===' 00:10:23.697 + echo '' 00:10:23.697 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0Dj ===' 00:10:23.697 + cat /tmp/spdk_tgt_config.json.0Dj 00:10:23.697 + echo '=== End of file: /tmp/spdk_tgt_config.json.0Dj ===' 00:10:23.697 + echo '' 00:10:23.697 + rm /tmp/62.rff /tmp/spdk_tgt_config.json.0Dj 00:10:23.697 + exit 1 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:10:23.697 INFO: configuration change detected. 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@324 -- # [[ -n 1965604 ]] 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@200 -- # uname -s 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:23.697 10:21:08 json_config -- json_config/json_config.sh@330 -- # killprocess 1965604 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@954 -- # '[' -z 1965604 ']' 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@958 -- # kill -0 1965604 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@959 -- # uname 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.697 10:21:08 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1965604 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1965604' 00:10:23.697 killing process with pid 1965604 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@973 -- # kill 1965604 00:10:23.697 10:21:08 json_config -- common/autotest_common.sh@978 -- # wait 1965604 00:10:25.599 10:21:09 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:25.599 10:21:09 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:10:25.599 10:21:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:25.599 10:21:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.599 10:21:09 json_config -- json_config/json_config.sh@335 -- # return 0 00:10:25.599 10:21:09 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:10:25.599 INFO: Success 00:10:25.599 00:10:25.599 real 0m21.286s 00:10:25.599 user 0m26.346s 00:10:25.599 sys 0m4.014s 00:10:25.599 10:21:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.599 10:21:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:25.599 ************************************ 00:10:25.599 END TEST json_config 00:10:25.599 ************************************ 00:10:25.599 10:21:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:25.599 10:21:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:25.599 10:21:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.599 10:21:09 -- common/autotest_common.sh@10 -- # set +x 00:10:25.599 ************************************ 00:10:25.599 START TEST json_config_extra_key 00:10:25.599 ************************************ 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.599 10:21:10 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.599 10:21:10 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.599 --rc genhtml_branch_coverage=1 00:10:25.599 --rc genhtml_function_coverage=1 00:10:25.599 --rc genhtml_legend=1 00:10:25.599 --rc geninfo_all_blocks=1 00:10:25.599 --rc geninfo_unexecuted_blocks=1 00:10:25.599 00:10:25.599 ' 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.599 --rc genhtml_branch_coverage=1 00:10:25.599 --rc genhtml_function_coverage=1 00:10:25.599 --rc genhtml_legend=1 00:10:25.599 --rc geninfo_all_blocks=1 00:10:25.599 --rc geninfo_unexecuted_blocks=1 00:10:25.599 00:10:25.599 ' 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.599 --rc genhtml_branch_coverage=1 00:10:25.599 --rc genhtml_function_coverage=1 00:10:25.599 --rc genhtml_legend=1 00:10:25.599 --rc geninfo_all_blocks=1 00:10:25.599 --rc geninfo_unexecuted_blocks=1 00:10:25.599 00:10:25.599 ' 00:10:25.599 10:21:10 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.599 --rc genhtml_branch_coverage=1 00:10:25.599 --rc genhtml_function_coverage=1 00:10:25.599 --rc genhtml_legend=1 00:10:25.599 --rc geninfo_all_blocks=1 00:10:25.599 --rc geninfo_unexecuted_blocks=1 00:10:25.599 00:10:25.599 ' 00:10:25.599 10:21:10 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.599 10:21:10 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.600 10:21:10 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.600 10:21:10 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.600 10:21:10 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.600 10:21:10 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.600 10:21:10 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.600 10:21:10 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.600 10:21:10 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.600 10:21:10 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:25.600 10:21:10 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.600 10:21:10 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:25.600 INFO: launching applications... 
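Note: the "[: : integer expression expected" complaint above is bash's test builtin rejecting an empty operand, not a test failure. The traced command at nvmf/common.sh line 33 is '[' '' -eq 1 ']': a variable that is unset in this environment expands to an empty string, and -eq requires integers on both sides. The test simply returns non-zero, so the script falls through harmlessly. Reproduced, together with the usual defensive form (the variable name here is hypothetical, not the one common.sh actually tests):

    [ '' -eq 1 ]                        # -> [: : integer expression expected
    [ "${SOME_FLAG:-0}" -eq 1 ]         # default empty/unset to 0 before comparing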
00:10:25.600 10:21:10 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1967188 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:25.600 Waiting for target to run... 00:10:25.600 10:21:10 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1967188 /var/tmp/spdk_tgt.sock 00:10:25.600 10:21:10 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1967188 ']' 00:10:25.600 10:21:10 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:25.600 10:21:10 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.600 10:21:10 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:25.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:25.600 10:21:10 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.600 10:21:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 [2024-12-09 10:21:10.268744] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:10:25.859 [2024-12-09 10:21:10.268846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967188 ] 00:10:26.427 [2024-12-09 10:21:10.827169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.427 [2024-12-09 10:21:10.921580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.995 10:21:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.995 10:21:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:26.995 00:10:26.995 10:21:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:26.995 INFO: shutting down applications... 
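Note: waitforlisten gates the rest of the test on the target actually serving RPCs, not just on the process having forked. A condensed sketch of the idea (not the in-tree helper verbatim; max_retries=100 matches the logged local, the probe method and poll interval are illustrative):

    waitfor_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1       # app died during startup
            [[ -S $sock ]] \
                && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null \
                && return 0                              # socket up and answering RPCs
            sleep 0.1
        done
        return 1                                         # timed out
    }

The "(( i == 0 ))" / "return 0" pair in the trace below is this wait completing on its first successful probe.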
00:10:26.995 10:21:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1967188 ]] 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1967188 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1967188 00:10:26.995 10:21:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:27.561 10:21:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:27.561 10:21:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:27.561 10:21:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1967188 00:10:27.561 10:21:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:27.820 10:21:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:27.820 10:21:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:27.820 10:21:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1967188 00:10:27.820 10:21:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:27.820 10:21:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:27.820 10:21:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:27.820 10:21:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:27.820 SPDK target shutdown done 00:10:27.820 10:21:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:27.820 Success 00:10:27.820 00:10:27.820 real 0m2.411s 00:10:27.820 user 0m2.074s 00:10:27.820 sys 0m0.677s 00:10:27.820 10:21:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.820 10:21:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:27.820 ************************************ 00:10:27.820 END TEST json_config_extra_key 00:10:27.820 ************************************ 00:10:27.820 10:21:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:27.820 10:21:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:27.820 10:21:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.820 10:21:12 -- common/autotest_common.sh@10 -- # set +x 00:10:28.079 ************************************ 00:10:28.079 START TEST alias_rpc 00:10:28.079 ************************************ 00:10:28.079 10:21:12 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:28.079 * Looking for test storage... 
00:10:28.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:10:28.079 10:21:12 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.079 10:21:12 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.079 10:21:12 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.338 10:21:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.338 --rc genhtml_branch_coverage=1 00:10:28.338 --rc genhtml_function_coverage=1 00:10:28.338 --rc genhtml_legend=1 00:10:28.338 --rc geninfo_all_blocks=1 00:10:28.338 --rc geninfo_unexecuted_blocks=1 00:10:28.338 00:10:28.338 ' 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.338 --rc genhtml_branch_coverage=1 00:10:28.338 --rc genhtml_function_coverage=1 00:10:28.338 --rc genhtml_legend=1 00:10:28.338 --rc geninfo_all_blocks=1 00:10:28.338 --rc geninfo_unexecuted_blocks=1 00:10:28.338 00:10:28.338 ' 00:10:28.338 10:21:12 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.338 --rc genhtml_branch_coverage=1 00:10:28.338 --rc genhtml_function_coverage=1 00:10:28.338 --rc genhtml_legend=1 00:10:28.338 --rc geninfo_all_blocks=1 00:10:28.338 --rc geninfo_unexecuted_blocks=1 00:10:28.338 00:10:28.338 ' 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.338 --rc genhtml_branch_coverage=1 00:10:28.338 --rc genhtml_function_coverage=1 00:10:28.338 --rc genhtml_legend=1 00:10:28.338 --rc geninfo_all_blocks=1 00:10:28.338 --rc geninfo_unexecuted_blocks=1 00:10:28.338 00:10:28.338 ' 00:10:28.338 10:21:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:28.338 10:21:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1967524 00:10:28.338 10:21:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:28.338 10:21:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1967524 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1967524 ']' 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.338 10:21:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.338 [2024-12-09 10:21:12.932617] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:10:28.338 [2024-12-09 10:21:12.932819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967524 ] 00:10:28.597 [2024-12-09 10:21:13.095457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.597 [2024-12-09 10:21:13.215026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.532 10:21:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.532 10:21:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:29.532 10:21:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:10:30.468 10:21:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1967524 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1967524 ']' 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1967524 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967524 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967524' 00:10:30.468 killing process with pid 1967524 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 1967524 00:10:30.468 10:21:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 1967524 00:10:31.032 00:10:31.032 real 0m3.005s 00:10:31.032 user 0m3.580s 00:10:31.032 sys 0m0.933s 00:10:31.032 10:21:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.032 10:21:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.032 ************************************ 00:10:31.032 END TEST alias_rpc 00:10:31.032 ************************************ 00:10:31.032 10:21:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:31.032 10:21:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:31.032 10:21:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.032 10:21:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.032 10:21:15 -- common/autotest_common.sh@10 -- # set +x 00:10:31.032 ************************************ 00:10:31.032 START TEST spdkcli_tcp 00:10:31.032 ************************************ 00:10:31.032 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:31.032 * Looking for test storage... 
00:10:31.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:10:31.032 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:31.033 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:31.033 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.291 10:21:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.291 --rc genhtml_branch_coverage=1 00:10:31.291 --rc genhtml_function_coverage=1 00:10:31.291 --rc genhtml_legend=1 00:10:31.291 --rc geninfo_all_blocks=1 00:10:31.291 --rc geninfo_unexecuted_blocks=1 00:10:31.291 00:10:31.291 ' 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.291 --rc genhtml_branch_coverage=1 00:10:31.291 --rc genhtml_function_coverage=1 00:10:31.291 --rc genhtml_legend=1 00:10:31.291 --rc geninfo_all_blocks=1 00:10:31.291 --rc 
geninfo_unexecuted_blocks=1 00:10:31.291 00:10:31.291 ' 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.291 --rc genhtml_branch_coverage=1 00:10:31.291 --rc genhtml_function_coverage=1 00:10:31.291 --rc genhtml_legend=1 00:10:31.291 --rc geninfo_all_blocks=1 00:10:31.291 --rc geninfo_unexecuted_blocks=1 00:10:31.291 00:10:31.291 ' 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:31.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.291 --rc genhtml_branch_coverage=1 00:10:31.291 --rc genhtml_function_coverage=1 00:10:31.291 --rc genhtml_legend=1 00:10:31.291 --rc geninfo_all_blocks=1 00:10:31.291 --rc geninfo_unexecuted_blocks=1 00:10:31.291 00:10:31.291 ' 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1967977 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:31.291 10:21:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1967977 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1967977 ']' 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.291 10:21:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.291 [2024-12-09 10:21:15.929704] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:10:31.291 [2024-12-09 10:21:15.929832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967977 ] 00:10:31.549 [2024-12-09 10:21:16.056004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:31.549 [2024-12-09 10:21:16.174776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.549 [2024-12-09 10:21:16.174792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.117 10:21:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.117 10:21:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:32.117 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1968107 00:10:32.117 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:32.117 10:21:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:32.684 [ 00:10:32.684 "bdev_malloc_delete", 00:10:32.684 "bdev_malloc_create", 00:10:32.684 "bdev_null_resize", 00:10:32.684 "bdev_null_delete", 00:10:32.684 "bdev_null_create", 00:10:32.684 "bdev_nvme_cuse_unregister", 00:10:32.684 "bdev_nvme_cuse_register", 00:10:32.684 "bdev_opal_new_user", 00:10:32.684 "bdev_opal_set_lock_state", 00:10:32.684 "bdev_opal_delete", 00:10:32.684 "bdev_opal_get_info", 00:10:32.684 "bdev_opal_create", 00:10:32.684 "bdev_nvme_opal_revert", 00:10:32.684 "bdev_nvme_opal_init", 00:10:32.684 "bdev_nvme_send_cmd", 00:10:32.684 "bdev_nvme_set_keys", 00:10:32.684 "bdev_nvme_get_path_iostat", 00:10:32.684 "bdev_nvme_get_mdns_discovery_info", 00:10:32.684 "bdev_nvme_stop_mdns_discovery", 00:10:32.684 "bdev_nvme_start_mdns_discovery", 00:10:32.684 "bdev_nvme_set_multipath_policy", 00:10:32.684 "bdev_nvme_set_preferred_path", 00:10:32.684 "bdev_nvme_get_io_paths", 00:10:32.684 "bdev_nvme_remove_error_injection", 00:10:32.684 "bdev_nvme_add_error_injection", 00:10:32.684 "bdev_nvme_get_discovery_info", 00:10:32.684 "bdev_nvme_stop_discovery", 00:10:32.684 "bdev_nvme_start_discovery", 00:10:32.684 "bdev_nvme_get_controller_health_info", 00:10:32.684 "bdev_nvme_disable_controller", 00:10:32.684 "bdev_nvme_enable_controller", 00:10:32.684 "bdev_nvme_reset_controller", 00:10:32.684 "bdev_nvme_get_transport_statistics", 00:10:32.684 "bdev_nvme_apply_firmware", 00:10:32.684 "bdev_nvme_detach_controller", 00:10:32.684 "bdev_nvme_get_controllers", 00:10:32.684 "bdev_nvme_attach_controller", 00:10:32.684 "bdev_nvme_set_hotplug", 00:10:32.684 "bdev_nvme_set_options", 00:10:32.684 "bdev_passthru_delete", 00:10:32.684 "bdev_passthru_create", 00:10:32.684 "bdev_lvol_set_parent_bdev", 00:10:32.685 "bdev_lvol_set_parent", 00:10:32.685 "bdev_lvol_check_shallow_copy", 00:10:32.685 "bdev_lvol_start_shallow_copy", 00:10:32.685 "bdev_lvol_grow_lvstore", 00:10:32.685 "bdev_lvol_get_lvols", 00:10:32.685 "bdev_lvol_get_lvstores", 00:10:32.685 "bdev_lvol_delete", 00:10:32.685 "bdev_lvol_set_read_only", 00:10:32.685 "bdev_lvol_resize", 00:10:32.685 "bdev_lvol_decouple_parent", 00:10:32.685 "bdev_lvol_inflate", 00:10:32.685 "bdev_lvol_rename", 00:10:32.685 "bdev_lvol_clone_bdev", 00:10:32.685 "bdev_lvol_clone", 00:10:32.685 "bdev_lvol_snapshot", 00:10:32.685 "bdev_lvol_create", 00:10:32.685 "bdev_lvol_delete_lvstore", 00:10:32.685 "bdev_lvol_rename_lvstore", 
00:10:32.685 "bdev_lvol_create_lvstore", 00:10:32.685 "bdev_raid_set_options", 00:10:32.685 "bdev_raid_remove_base_bdev", 00:10:32.685 "bdev_raid_add_base_bdev", 00:10:32.685 "bdev_raid_delete", 00:10:32.685 "bdev_raid_create", 00:10:32.685 "bdev_raid_get_bdevs", 00:10:32.685 "bdev_error_inject_error", 00:10:32.685 "bdev_error_delete", 00:10:32.685 "bdev_error_create", 00:10:32.685 "bdev_split_delete", 00:10:32.685 "bdev_split_create", 00:10:32.685 "bdev_delay_delete", 00:10:32.685 "bdev_delay_create", 00:10:32.685 "bdev_delay_update_latency", 00:10:32.685 "bdev_zone_block_delete", 00:10:32.685 "bdev_zone_block_create", 00:10:32.685 "blobfs_create", 00:10:32.685 "blobfs_detect", 00:10:32.685 "blobfs_set_cache_size", 00:10:32.685 "bdev_aio_delete", 00:10:32.685 "bdev_aio_rescan", 00:10:32.685 "bdev_aio_create", 00:10:32.685 "bdev_ftl_set_property", 00:10:32.685 "bdev_ftl_get_properties", 00:10:32.685 "bdev_ftl_get_stats", 00:10:32.685 "bdev_ftl_unmap", 00:10:32.685 "bdev_ftl_unload", 00:10:32.685 "bdev_ftl_delete", 00:10:32.685 "bdev_ftl_load", 00:10:32.685 "bdev_ftl_create", 00:10:32.685 "bdev_virtio_attach_controller", 00:10:32.685 "bdev_virtio_scsi_get_devices", 00:10:32.685 "bdev_virtio_detach_controller", 00:10:32.685 "bdev_virtio_blk_set_hotplug", 00:10:32.685 "bdev_iscsi_delete", 00:10:32.685 "bdev_iscsi_create", 00:10:32.685 "bdev_iscsi_set_options", 00:10:32.685 "accel_error_inject_error", 00:10:32.685 "ioat_scan_accel_module", 00:10:32.685 "dsa_scan_accel_module", 00:10:32.685 "iaa_scan_accel_module", 00:10:32.685 "vfu_virtio_create_fs_endpoint", 00:10:32.685 "vfu_virtio_create_scsi_endpoint", 00:10:32.685 "vfu_virtio_scsi_remove_target", 00:10:32.685 "vfu_virtio_scsi_add_target", 00:10:32.685 "vfu_virtio_create_blk_endpoint", 00:10:32.685 "vfu_virtio_delete_endpoint", 00:10:32.685 "keyring_file_remove_key", 00:10:32.685 "keyring_file_add_key", 00:10:32.685 "keyring_linux_set_options", 00:10:32.685 "fsdev_aio_delete", 00:10:32.685 "fsdev_aio_create", 00:10:32.685 "iscsi_get_histogram", 00:10:32.685 "iscsi_enable_histogram", 00:10:32.685 "iscsi_set_options", 00:10:32.685 "iscsi_get_auth_groups", 00:10:32.685 "iscsi_auth_group_remove_secret", 00:10:32.685 "iscsi_auth_group_add_secret", 00:10:32.685 "iscsi_delete_auth_group", 00:10:32.685 "iscsi_create_auth_group", 00:10:32.685 "iscsi_set_discovery_auth", 00:10:32.685 "iscsi_get_options", 00:10:32.685 "iscsi_target_node_request_logout", 00:10:32.685 "iscsi_target_node_set_redirect", 00:10:32.685 "iscsi_target_node_set_auth", 00:10:32.685 "iscsi_target_node_add_lun", 00:10:32.685 "iscsi_get_stats", 00:10:32.685 "iscsi_get_connections", 00:10:32.685 "iscsi_portal_group_set_auth", 00:10:32.685 "iscsi_start_portal_group", 00:10:32.685 "iscsi_delete_portal_group", 00:10:32.685 "iscsi_create_portal_group", 00:10:32.685 "iscsi_get_portal_groups", 00:10:32.685 "iscsi_delete_target_node", 00:10:32.685 "iscsi_target_node_remove_pg_ig_maps", 00:10:32.685 "iscsi_target_node_add_pg_ig_maps", 00:10:32.685 "iscsi_create_target_node", 00:10:32.685 "iscsi_get_target_nodes", 00:10:32.685 "iscsi_delete_initiator_group", 00:10:32.685 "iscsi_initiator_group_remove_initiators", 00:10:32.685 "iscsi_initiator_group_add_initiators", 00:10:32.685 "iscsi_create_initiator_group", 00:10:32.685 "iscsi_get_initiator_groups", 00:10:32.685 "nvmf_set_crdt", 00:10:32.685 "nvmf_set_config", 00:10:32.685 "nvmf_set_max_subsystems", 00:10:32.685 "nvmf_stop_mdns_prr", 00:10:32.685 "nvmf_publish_mdns_prr", 00:10:32.685 "nvmf_subsystem_get_listeners", 00:10:32.685 
"nvmf_subsystem_get_qpairs", 00:10:32.685 "nvmf_subsystem_get_controllers", 00:10:32.685 "nvmf_get_stats", 00:10:32.685 "nvmf_get_transports", 00:10:32.685 "nvmf_create_transport", 00:10:32.685 "nvmf_get_targets", 00:10:32.685 "nvmf_delete_target", 00:10:32.685 "nvmf_create_target", 00:10:32.685 "nvmf_subsystem_allow_any_host", 00:10:32.685 "nvmf_subsystem_set_keys", 00:10:32.685 "nvmf_subsystem_remove_host", 00:10:32.685 "nvmf_subsystem_add_host", 00:10:32.685 "nvmf_ns_remove_host", 00:10:32.685 "nvmf_ns_add_host", 00:10:32.685 "nvmf_subsystem_remove_ns", 00:10:32.685 "nvmf_subsystem_set_ns_ana_group", 00:10:32.685 "nvmf_subsystem_add_ns", 00:10:32.685 "nvmf_subsystem_listener_set_ana_state", 00:10:32.685 "nvmf_discovery_get_referrals", 00:10:32.685 "nvmf_discovery_remove_referral", 00:10:32.685 "nvmf_discovery_add_referral", 00:10:32.685 "nvmf_subsystem_remove_listener", 00:10:32.685 "nvmf_subsystem_add_listener", 00:10:32.685 "nvmf_delete_subsystem", 00:10:32.685 "nvmf_create_subsystem", 00:10:32.685 "nvmf_get_subsystems", 00:10:32.685 "env_dpdk_get_mem_stats", 00:10:32.685 "nbd_get_disks", 00:10:32.685 "nbd_stop_disk", 00:10:32.685 "nbd_start_disk", 00:10:32.685 "ublk_recover_disk", 00:10:32.685 "ublk_get_disks", 00:10:32.685 "ublk_stop_disk", 00:10:32.685 "ublk_start_disk", 00:10:32.685 "ublk_destroy_target", 00:10:32.685 "ublk_create_target", 00:10:32.685 "virtio_blk_create_transport", 00:10:32.685 "virtio_blk_get_transports", 00:10:32.685 "vhost_controller_set_coalescing", 00:10:32.685 "vhost_get_controllers", 00:10:32.685 "vhost_delete_controller", 00:10:32.685 "vhost_create_blk_controller", 00:10:32.685 "vhost_scsi_controller_remove_target", 00:10:32.685 "vhost_scsi_controller_add_target", 00:10:32.685 "vhost_start_scsi_controller", 00:10:32.685 "vhost_create_scsi_controller", 00:10:32.685 "thread_set_cpumask", 00:10:32.685 "scheduler_set_options", 00:10:32.685 "framework_get_governor", 00:10:32.685 "framework_get_scheduler", 00:10:32.685 "framework_set_scheduler", 00:10:32.685 "framework_get_reactors", 00:10:32.685 "thread_get_io_channels", 00:10:32.685 "thread_get_pollers", 00:10:32.685 "thread_get_stats", 00:10:32.685 "framework_monitor_context_switch", 00:10:32.685 "spdk_kill_instance", 00:10:32.686 "log_enable_timestamps", 00:10:32.686 "log_get_flags", 00:10:32.686 "log_clear_flag", 00:10:32.686 "log_set_flag", 00:10:32.686 "log_get_level", 00:10:32.686 "log_set_level", 00:10:32.686 "log_get_print_level", 00:10:32.686 "log_set_print_level", 00:10:32.686 "framework_enable_cpumask_locks", 00:10:32.686 "framework_disable_cpumask_locks", 00:10:32.686 "framework_wait_init", 00:10:32.686 "framework_start_init", 00:10:32.686 "scsi_get_devices", 00:10:32.686 "bdev_get_histogram", 00:10:32.686 "bdev_enable_histogram", 00:10:32.686 "bdev_set_qos_limit", 00:10:32.686 "bdev_set_qd_sampling_period", 00:10:32.686 "bdev_get_bdevs", 00:10:32.686 "bdev_reset_iostat", 00:10:32.686 "bdev_get_iostat", 00:10:32.686 "bdev_examine", 00:10:32.686 "bdev_wait_for_examine", 00:10:32.686 "bdev_set_options", 00:10:32.686 "accel_get_stats", 00:10:32.686 "accel_set_options", 00:10:32.686 "accel_set_driver", 00:10:32.686 "accel_crypto_key_destroy", 00:10:32.686 "accel_crypto_keys_get", 00:10:32.686 "accel_crypto_key_create", 00:10:32.686 "accel_assign_opc", 00:10:32.686 "accel_get_module_info", 00:10:32.686 "accel_get_opc_assignments", 00:10:32.686 "vmd_rescan", 00:10:32.686 "vmd_remove_device", 00:10:32.686 "vmd_enable", 00:10:32.686 "sock_get_default_impl", 00:10:32.686 "sock_set_default_impl", 
00:10:32.686 "sock_impl_set_options", 00:10:32.686 "sock_impl_get_options", 00:10:32.686 "iobuf_get_stats", 00:10:32.686 "iobuf_set_options", 00:10:32.686 "keyring_get_keys", 00:10:32.686 "vfu_tgt_set_base_path", 00:10:32.686 "framework_get_pci_devices", 00:10:32.686 "framework_get_config", 00:10:32.686 "framework_get_subsystems", 00:10:32.686 "fsdev_set_opts", 00:10:32.686 "fsdev_get_opts", 00:10:32.686 "trace_get_info", 00:10:32.686 "trace_get_tpoint_group_mask", 00:10:32.686 "trace_disable_tpoint_group", 00:10:32.686 "trace_enable_tpoint_group", 00:10:32.686 "trace_clear_tpoint_mask", 00:10:32.686 "trace_set_tpoint_mask", 00:10:32.686 "notify_get_notifications", 00:10:32.686 "notify_get_types", 00:10:32.686 "spdk_get_version", 00:10:32.686 "rpc_get_methods" 00:10:32.686 ] 00:10:32.686 10:21:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.686 10:21:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:32.686 10:21:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1967977 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1967977 ']' 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1967977 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967977 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967977' 00:10:32.686 killing process with pid 1967977 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1967977 00:10:32.686 10:21:17 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1967977 00:10:33.651 00:10:33.651 real 0m2.365s 00:10:33.651 user 0m4.353s 00:10:33.651 sys 0m0.789s 00:10:33.651 10:21:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.651 10:21:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.651 ************************************ 00:10:33.651 END TEST spdkcli_tcp 00:10:33.651 ************************************ 00:10:33.651 10:21:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:33.651 10:21:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:33.651 10:21:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.651 10:21:17 -- common/autotest_common.sh@10 -- # set +x 00:10:33.651 ************************************ 00:10:33.651 START TEST dpdk_mem_utility 00:10:33.651 ************************************ 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:33.651 * Looking for test storage... 
00:10:33.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.651 10:21:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.651 --rc genhtml_branch_coverage=1 00:10:33.651 --rc genhtml_function_coverage=1 00:10:33.651 --rc genhtml_legend=1 00:10:33.651 --rc geninfo_all_blocks=1 00:10:33.651 --rc geninfo_unexecuted_blocks=1 00:10:33.651 00:10:33.651 ' 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.651 --rc 
genhtml_branch_coverage=1 00:10:33.651 --rc genhtml_function_coverage=1 00:10:33.651 --rc genhtml_legend=1 00:10:33.651 --rc geninfo_all_blocks=1 00:10:33.651 --rc geninfo_unexecuted_blocks=1 00:10:33.651 00:10:33.651 ' 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.651 --rc genhtml_branch_coverage=1 00:10:33.651 --rc genhtml_function_coverage=1 00:10:33.651 --rc genhtml_legend=1 00:10:33.651 --rc geninfo_all_blocks=1 00:10:33.651 --rc geninfo_unexecuted_blocks=1 00:10:33.651 00:10:33.651 ' 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:33.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.651 --rc genhtml_branch_coverage=1 00:10:33.651 --rc genhtml_function_coverage=1 00:10:33.651 --rc genhtml_legend=1 00:10:33.651 --rc geninfo_all_blocks=1 00:10:33.651 --rc geninfo_unexecuted_blocks=1 00:10:33.651 00:10:33.651 ' 00:10:33.651 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:33.651 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1968319 00:10:33.651 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:33.651 10:21:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1968319 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1968319 ']' 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.651 10:21:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:33.910 [2024-12-09 10:21:18.340761] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
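The dpdk_mem_utility test starting here reduces to two RPC-driven steps, both visible in the trace below: env_dpdk_get_mem_stats makes the target dump its DPDK allocator state (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py renders that file as the heap/mempool/memzone summary. A minimal sketch of the same flow, assuming a running target on the default socket and the default dump path:

  ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                 # heap/mempool/memzone totals
  ./scripts/dpdk_mem_info.py -m 0            # per-element detail for heap id 0, as dumped below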
00:10:33.910 [2024-12-09 10:21:18.340865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968319 ] 00:10:33.910 [2024-12-09 10:21:18.468575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.167 [2024-12-09 10:21:18.591744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.736 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.736 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:34.736 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:34.736 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:34.736 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.736 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:34.736 { 00:10:34.736 "filename": "/tmp/spdk_mem_dump.txt" 00:10:34.736 } 00:10:34.736 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.736 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:34.736 DPDK memory size 818.000000 MiB in 1 heap(s) 00:10:34.736 1 heaps totaling size 818.000000 MiB 00:10:34.736 size: 818.000000 MiB heap id: 0 00:10:34.736 end heaps---------- 00:10:34.736 9 mempools totaling size 603.782043 MiB 00:10:34.736 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:34.736 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:34.736 size: 100.555481 MiB name: bdev_io_1968319 00:10:34.736 size: 50.003479 MiB name: msgpool_1968319 00:10:34.736 size: 36.509338 MiB name: fsdev_io_1968319 00:10:34.736 size: 21.763794 MiB name: PDU_Pool 00:10:34.736 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:34.736 size: 4.133484 MiB name: evtpool_1968319 00:10:34.736 size: 0.026123 MiB name: Session_Pool 00:10:34.736 end mempools------- 00:10:34.736 6 memzones totaling size 4.142822 MiB 00:10:34.736 size: 1.000366 MiB name: RG_ring_0_1968319 00:10:34.736 size: 1.000366 MiB name: RG_ring_1_1968319 00:10:34.736 size: 1.000366 MiB name: RG_ring_4_1968319 00:10:34.736 size: 1.000366 MiB name: RG_ring_5_1968319 00:10:34.736 size: 0.125366 MiB name: RG_ring_2_1968319 00:10:34.736 size: 0.015991 MiB name: RG_ring_3_1968319 00:10:34.736 end memzones------- 00:10:34.736 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:10:34.736 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:34.736 list of free elements. 
size: 10.852478 MiB 00:10:34.736 element at address: 0x200019200000 with size: 0.999878 MiB 00:10:34.736 element at address: 0x200019400000 with size: 0.999878 MiB 00:10:34.736 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:34.736 element at address: 0x200032000000 with size: 0.994446 MiB 00:10:34.736 element at address: 0x200006400000 with size: 0.959839 MiB 00:10:34.736 element at address: 0x200012c00000 with size: 0.944275 MiB 00:10:34.736 element at address: 0x200019600000 with size: 0.936584 MiB 00:10:34.736 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:34.736 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:10:34.736 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:34.736 element at address: 0x20000a600000 with size: 0.490723 MiB 00:10:34.736 element at address: 0x200019800000 with size: 0.485657 MiB 00:10:34.736 element at address: 0x200003e00000 with size: 0.481934 MiB 00:10:34.736 element at address: 0x200028200000 with size: 0.410034 MiB 00:10:34.736 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:34.736 list of standard malloc elements. size: 199.218628 MiB 00:10:34.736 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:10:34.736 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:10:34.736 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:34.736 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:10:34.736 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:10:34.736 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:34.736 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:10:34.736 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:34.736 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:10:34.736 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000085f300 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000087f680 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200003efb980 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:10:34.736 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:10:34.736 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:10:34.736 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200028268f80 with size: 0.000183 MiB 00:10:34.736 element at address: 0x200028269040 with size: 0.000183 MiB 00:10:34.737 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:10:34.737 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:10:34.737 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:10:34.737 list of memzone associated elements. size: 607.928894 MiB 00:10:34.737 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:10:34.737 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:34.737 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:10:34.737 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:34.737 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:10:34.737 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1968319_0 00:10:34.737 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:34.737 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1968319_0 00:10:34.737 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:10:34.737 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1968319_0 00:10:34.737 element at address: 0x2000199be940 with size: 20.255554 MiB 00:10:34.737 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:34.737 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:10:34.737 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:34.737 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:34.737 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1968319_0 00:10:34.737 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:34.737 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1968319 00:10:34.737 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:34.737 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1968319 00:10:34.737 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:10:34.737 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:34.737 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:10:34.737 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:34.737 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:10:34.737 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:34.737 element at address: 0x200003efba40 with size: 1.008118 MiB 00:10:34.737 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:34.737 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:34.737 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1968319 00:10:34.737 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:34.737 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1968319 00:10:34.737 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:10:34.737 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1968319 00:10:34.737 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:10:34.737 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1968319 00:10:34.737 element at address: 0x20000087f740 with size: 0.500488 MiB 00:10:34.737 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1968319 00:10:34.737 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:34.737 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1968319 00:10:34.737 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:10:34.737 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:34.737 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:10:34.737 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:34.737 element at address: 0x20001987c540 with size: 0.250488 MiB 00:10:34.737 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:34.737 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:34.737 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1968319 00:10:34.737 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:10:34.737 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1968319 00:10:34.737 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:10:34.737 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:34.737 element at address: 0x200028269100 with size: 0.023743 MiB 00:10:34.737 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:34.737 element at address: 0x20000085b100 with size: 0.016113 MiB 00:10:34.737 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1968319 00:10:34.737 element at address: 0x20002826f240 with size: 0.002441 MiB 00:10:34.737 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:34.737 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:34.737 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1968319 00:10:34.737 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:10:34.737 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1968319 00:10:34.737 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:34.737 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1968319 00:10:34.737 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:10:34.737 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:34.737 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:34.737 10:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1968319 00:10:34.737 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1968319 ']' 00:10:34.737 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1968319 00:10:34.737 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:34.737 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.737 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1968319 00:10:35.020 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.020 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.020 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1968319' 00:10:35.021 killing process with pid 1968319 00:10:35.021 10:21:19 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1968319 00:10:35.021 10:21:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1968319 00:10:35.586 00:10:35.586 real 0m1.994s 00:10:35.586 user 0m2.150s 00:10:35.586 sys 0m0.767s 00:10:35.586 10:21:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.586 10:21:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:35.586 ************************************ 00:10:35.587 END TEST dpdk_mem_utility 00:10:35.587 ************************************ 00:10:35.587 10:21:20 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:35.587 10:21:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.587 10:21:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.587 10:21:20 -- common/autotest_common.sh@10 -- # set +x 00:10:35.587 ************************************ 00:10:35.587 START TEST event 00:10:35.587 ************************************ 00:10:35.587 10:21:20 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:35.587 * Looking for test storage... 00:10:35.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:35.587 10:21:20 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.587 10:21:20 event -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.587 10:21:20 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.845 10:21:20 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.845 10:21:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.845 10:21:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.845 10:21:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.845 10:21:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.845 10:21:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.845 10:21:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.845 10:21:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.845 10:21:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.845 10:21:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.845 10:21:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.845 10:21:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.845 10:21:20 event -- scripts/common.sh@344 -- # case "$op" in 00:10:35.845 10:21:20 event -- scripts/common.sh@345 -- # : 1 00:10:35.845 10:21:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.845 10:21:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.845 10:21:20 event -- scripts/common.sh@365 -- # decimal 1 00:10:35.845 10:21:20 event -- scripts/common.sh@353 -- # local d=1 00:10:35.845 10:21:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.845 10:21:20 event -- scripts/common.sh@355 -- # echo 1 00:10:35.845 10:21:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.845 10:21:20 event -- scripts/common.sh@366 -- # decimal 2 00:10:35.845 10:21:20 event -- scripts/common.sh@353 -- # local d=2 00:10:35.845 10:21:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.845 10:21:20 event -- scripts/common.sh@355 -- # echo 2 00:10:35.845 10:21:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.845 10:21:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.845 10:21:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.845 10:21:20 event -- scripts/common.sh@368 -- # return 0 00:10:35.845 10:21:20 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.846 10:21:20 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.846 --rc genhtml_branch_coverage=1 00:10:35.846 --rc genhtml_function_coverage=1 00:10:35.846 --rc genhtml_legend=1 00:10:35.846 --rc geninfo_all_blocks=1 00:10:35.846 --rc geninfo_unexecuted_blocks=1 00:10:35.846 00:10:35.846 ' 00:10:35.846 10:21:20 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.846 --rc genhtml_branch_coverage=1 00:10:35.846 --rc genhtml_function_coverage=1 00:10:35.846 --rc genhtml_legend=1 00:10:35.846 --rc geninfo_all_blocks=1 00:10:35.846 --rc geninfo_unexecuted_blocks=1 00:10:35.846 00:10:35.846 ' 00:10:35.846 10:21:20 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.846 --rc genhtml_branch_coverage=1 00:10:35.846 --rc genhtml_function_coverage=1 00:10:35.846 --rc genhtml_legend=1 00:10:35.846 --rc geninfo_all_blocks=1 00:10:35.846 --rc geninfo_unexecuted_blocks=1 00:10:35.846 00:10:35.846 ' 00:10:35.846 10:21:20 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.846 --rc genhtml_branch_coverage=1 00:10:35.846 --rc genhtml_function_coverage=1 00:10:35.846 --rc genhtml_legend=1 00:10:35.846 --rc geninfo_all_blocks=1 00:10:35.846 --rc geninfo_unexecuted_blocks=1 00:10:35.846 00:10:35.846 ' 00:10:35.846 10:21:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:35.846 10:21:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:35.846 10:21:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:35.846 10:21:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:35.846 10:21:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.846 10:21:20 event -- common/autotest_common.sh@10 -- # set +x 00:10:35.846 ************************************ 00:10:35.846 START TEST event_perf 00:10:35.846 ************************************ 00:10:35.846 10:21:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:10:35.846 Running I/O for 1 seconds...[2024-12-09 10:21:20.328606] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:10:35.846 [2024-12-09 10:21:20.328689] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968641 ] 00:10:35.846 [2024-12-09 10:21:20.464020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.104 [2024-12-09 10:21:20.592200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.104 [2024-12-09 10:21:20.592298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.104 [2024-12-09 10:21:20.592397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.104 [2024-12-09 10:21:20.592401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.038 Running I/O for 1 seconds... 00:10:37.038 lcore 0: 214299 00:10:37.038 lcore 1: 214298 00:10:37.038 lcore 2: 214297 00:10:37.038 lcore 3: 214298 00:10:37.296 done. 00:10:37.296 00:10:37.296 real 0m1.393s 00:10:37.296 user 0m4.251s 00:10:37.296 sys 0m0.133s 00:10:37.296 10:21:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.296 10:21:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:37.296 ************************************ 00:10:37.296 END TEST event_perf 00:10:37.296 ************************************ 00:10:37.296 10:21:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:37.296 10:21:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:37.296 10:21:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.296 10:21:21 event -- common/autotest_common.sh@10 -- # set +x 00:10:37.296 ************************************ 00:10:37.296 START TEST event_reactor 00:10:37.296 ************************************ 00:10:37.296 10:21:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:37.296 [2024-12-09 10:21:21.810869] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
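The event_perf pass above is judged by the per-lcore counters: with mask 0xF, four reactors each retired roughly 214k events in the one-second window, so the event round-robin stayed balanced across cores. The binary can be rerun standalone with the same arguments, assuming a host already set up for SPDK (hugepages allocated):

  # four cores (0xF), one second of event passing
  ./test/event/event_perf/event_perf -m 0xF -t 1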
00:10:37.296 [2024-12-09 10:21:21.811011] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968809 ] 00:10:37.553 [2024-12-09 10:21:21.978717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.553 [2024-12-09 10:21:22.097455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.926 test_start 00:10:38.926 oneshot 00:10:38.926 tick 100 00:10:38.926 tick 100 00:10:38.926 tick 250 00:10:38.926 tick 100 00:10:38.926 tick 100 00:10:38.926 tick 100 00:10:38.926 tick 250 00:10:38.926 tick 500 00:10:38.926 tick 100 00:10:38.926 tick 100 00:10:38.926 tick 250 00:10:38.926 tick 100 00:10:38.926 tick 100 00:10:38.926 test_end 00:10:38.926 00:10:38.926 real 0m1.428s 00:10:38.926 user 0m1.279s 00:10:38.926 sys 0m0.138s 00:10:38.926 10:21:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.926 10:21:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:38.926 ************************************ 00:10:38.926 END TEST event_reactor 00:10:38.926 ************************************ 00:10:38.926 10:21:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:38.926 10:21:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:38.926 10:21:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.926 10:21:23 event -- common/autotest_common.sh@10 -- # set +x 00:10:38.926 ************************************ 00:10:38.926 START TEST event_reactor_perf 00:10:38.926 ************************************ 00:10:38.926 10:21:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:38.926 [2024-12-09 10:21:23.320383] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
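The reactor test above drives a single core through scripted one-shot and periodic events; the tick 100 / 250 / 500 lines between test_start and test_end are those events firing at their configured periods. Standalone, under the same assumption about host setup:

  # one second of scripted ticks on a single reactor
  ./test/event/reactor/reactor -t 1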
00:10:38.926 [2024-12-09 10:21:23.320528] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968962 ] 00:10:38.926 [2024-12-09 10:21:23.488486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.184 [2024-12-09 10:21:23.608131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.121 test_start 00:10:40.121 test_end 00:10:40.121 Performance: 162000 events per second 00:10:40.121 00:10:40.121 real 0m1.432s 00:10:40.121 user 0m1.275s 00:10:40.121 sys 0m0.146s 00:10:40.121 10:21:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.121 10:21:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:40.121 ************************************ 00:10:40.121 END TEST event_reactor_perf 00:10:40.121 ************************************ 00:10:40.121 10:21:24 event -- event/event.sh@49 -- # uname -s 00:10:40.121 10:21:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:40.121 10:21:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:40.121 10:21:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.121 10:21:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.121 10:21:24 event -- common/autotest_common.sh@10 -- # set +x 00:10:40.380 ************************************ 00:10:40.380 START TEST event_scheduler 00:10:40.380 ************************************ 00:10:40.380 10:21:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:40.380 * Looking for test storage... 
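reactor_perf above reports a single figure, events processed per second on one core (162000 in this run). The same microbenchmark runs standalone, with the path as used in this workspace:

  ./test/event/reactor_perf/reactor_perf -t 1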
00:10:40.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:10:40.380 10:21:24 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.380 10:21:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.380 10:21:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.380 10:21:24 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.380 10:21:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.380 10:21:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:40.380 10:21:25 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.380 10:21:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.380 --rc genhtml_branch_coverage=1 00:10:40.380 --rc genhtml_function_coverage=1 00:10:40.380 --rc genhtml_legend=1 00:10:40.380 --rc geninfo_all_blocks=1 00:10:40.380 --rc geninfo_unexecuted_blocks=1 00:10:40.380 00:10:40.380 ' 00:10:40.380 10:21:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.380 --rc genhtml_branch_coverage=1 00:10:40.380 --rc genhtml_function_coverage=1 00:10:40.380 --rc genhtml_legend=1 00:10:40.380 --rc geninfo_all_blocks=1 00:10:40.380 --rc geninfo_unexecuted_blocks=1 00:10:40.380 00:10:40.380 ' 00:10:40.380 10:21:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.380 --rc genhtml_branch_coverage=1 00:10:40.380 --rc genhtml_function_coverage=1 00:10:40.380 --rc genhtml_legend=1 00:10:40.380 --rc geninfo_all_blocks=1 00:10:40.380 --rc geninfo_unexecuted_blocks=1 00:10:40.380 00:10:40.380 ' 00:10:40.381 10:21:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.381 --rc genhtml_branch_coverage=1 00:10:40.381 --rc genhtml_function_coverage=1 00:10:40.381 --rc genhtml_legend=1 00:10:40.381 --rc geninfo_all_blocks=1 00:10:40.381 --rc geninfo_unexecuted_blocks=1 00:10:40.381 00:10:40.381 ' 00:10:40.381 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:40.381 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1969275 00:10:40.381 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:40.381 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:40.381 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1969275 00:10:40.381 10:21:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1969275 ']' 00:10:40.381 10:21:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.381 10:21:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.381 10:21:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.381 10:21:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.381 10:21:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:40.646 [2024-12-09 10:21:25.052710] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:10:40.646 [2024-12-09 10:21:25.052801] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969275 ] 00:10:40.646 [2024-12-09 10:21:25.177859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.908 [2024-12-09 10:21:25.309920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.908 [2024-12-09 10:21:25.310022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.908 [2024-12-09 10:21:25.310116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.908 [2024-12-09 10:21:25.310120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.908 10:21:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.908 10:21:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:40.908 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:40.908 10:21:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.908 10:21:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:40.908 [2024-12-09 10:21:25.499368] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:40.908 [2024-12-09 10:21:25.499399] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:40.908 [2024-12-09 10:21:25.499420] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:40.908 [2024-12-09 10:21:25.499434] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:40.908 [2024-12-09 10:21:25.499446] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:40.908 10:21:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.908 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:40.908 10:21:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.908 10:21:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 [2024-12-09 10:21:25.688142] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:41.168 10:21:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:41.168 10:21:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.168 10:21:25 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 ************************************ 00:10:41.168 START TEST scheduler_create_thread 00:10:41.168 ************************************ 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 2 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 3 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 4 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 5 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 6 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 7 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 8 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 9 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 10 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:41.168 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.428 10:21:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.397 10:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.397 00:10:42.397 real 0m1.177s 00:10:42.397 user 0m0.010s 00:10:42.397 sys 0m0.007s 00:10:42.397 10:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.397 10:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.397 ************************************ 00:10:42.397 END TEST scheduler_create_thread 00:10:42.397 ************************************ 00:10:42.397 10:21:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:42.397 10:21:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1969275 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1969275 ']' 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1969275 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969275 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969275' 00:10:42.397 killing process with pid 1969275 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1969275 00:10:42.397 10:21:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1969275 00:10:42.965 [2024-12-09 10:21:27.391533] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
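scheduler_create_thread above exercises the test app's plugin RPCs: scheduler_thread_create pins a lightweight thread to a core mask with a target busy percentage (-a), scheduler_thread_set_active retunes one thread by id, and scheduler_thread_delete removes it. A minimal sketch, assuming the scheduler test app is still running and that its plugin module (scheduler_plugin, shipped next to the test under test/event/scheduler) is importable; both details are assumptions rather than things this log spells out:

  export PYTHONPATH=./test/event/scheduler:$PYTHONPATH
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n busy_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread ids come from the create reply
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 11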
00:10:43.225 00:10:43.225 real 0m2.891s 00:10:43.225 user 0m3.757s 00:10:43.225 sys 0m0.548s 00:10:43.225 10:21:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.225 10:21:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.225 ************************************ 00:10:43.225 END TEST event_scheduler 00:10:43.225 ************************************ 00:10:43.225 10:21:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:43.225 10:21:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:43.225 10:21:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.225 10:21:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.225 10:21:27 event -- common/autotest_common.sh@10 -- # set +x 00:10:43.225 ************************************ 00:10:43.225 START TEST app_repeat 00:10:43.225 ************************************ 00:10:43.225 10:21:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1969598 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1969598' 00:10:43.225 Process app_repeat pid: 1969598 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:43.225 spdk_app_start Round 0 00:10:43.225 10:21:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1969598 /var/tmp/spdk-nbd.sock 00:10:43.225 10:21:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969598 ']' 00:10:43.225 10:21:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:43.225 10:21:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.225 10:21:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:43.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:43.225 10:21:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.225 10:21:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:43.225 [2024-12-09 10:21:27.811645] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
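app_repeat restarts its app over repeated rounds (-t 4, two cores via -m 0x3); in each round the test creates two 64 MiB malloc bdevs over the dedicated /var/tmp/spdk-nbd.sock socket, exports them as /dev/nbd0 and /dev/nbd1, and waitfornbd proves each device is live by reading one 4 KiB block. One round's bdev half looks like this (the dd output path here is arbitrary; the RPC calls are exactly the ones replayed below):

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct              # waitfornbd's liveness read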
00:10:43.225 [2024-12-09 10:21:27.811767] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969598 ] 00:10:43.484 [2024-12-09 10:21:27.943384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.484 [2024-12-09 10:21:28.071777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.484 [2024-12-09 10:21:28.071803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.052 10:21:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.052 10:21:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:44.052 10:21:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:44.621 Malloc0 00:10:44.621 10:21:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:45.189 Malloc1 00:10:45.189 10:21:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.189 10:21:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:46.124 /dev/nbd0 00:10:46.124 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:46.124 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:46.124 10:21:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:46.125 10:21:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:46.125 1+0 records in 00:10:46.125 1+0 records out 00:10:46.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0205448 s, 199 kB/s 00:10:46.125 10:21:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:46.125 10:21:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:46.125 10:21:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:46.125 10:21:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:46.125 10:21:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:46.125 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.125 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:46.125 10:21:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:46.383 /dev/nbd1 00:10:46.383 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:46.383 10:21:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:46.383 1+0 records in 00:10:46.383 1+0 records out 00:10:46.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274314 s, 14.9 MB/s 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:46.383 10:21:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:46.383 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:46.383 10:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:46.383 10:21:30 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:46.383 10:21:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.383 10:21:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:46.949 10:21:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:46.949 { 00:10:46.949 "nbd_device": "/dev/nbd0", 00:10:46.949 "bdev_name": "Malloc0" 00:10:46.949 }, 00:10:46.949 { 00:10:46.949 "nbd_device": "/dev/nbd1", 00:10:46.949 "bdev_name": "Malloc1" 00:10:46.949 } 00:10:46.949 ]' 00:10:46.949 10:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:46.949 { 00:10:46.949 "nbd_device": "/dev/nbd0", 00:10:46.949 "bdev_name": "Malloc0" 00:10:46.949 }, 00:10:46.949 { 00:10:46.949 "nbd_device": "/dev/nbd1", 00:10:46.949 "bdev_name": "Malloc1" 00:10:46.949 } 00:10:46.949 ]' 00:10:46.949 10:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:47.207 /dev/nbd1' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:47.207 /dev/nbd1' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:47.207 256+0 records in 00:10:47.207 256+0 records out 00:10:47.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00867455 s, 121 MB/s 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:47.207 256+0 records in 00:10:47.207 256+0 records out 00:10:47.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309685 s, 33.9 MB/s 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:47.207 256+0 records in 00:10:47.207 256+0 records out 00:10:47.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341484 s, 30.7 MB/s 00:10:47.207 10:21:31 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.207 10:21:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:47.465 10:21:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.031 10:21:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:48.597 10:21:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:48.597 10:21:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:48.597 10:21:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:48.855 10:21:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:48.855 10:21:33 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:49.138 10:21:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:49.705 [2024-12-09 10:21:34.124895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:49.705 [2024-12-09 10:21:34.242193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.705 [2024-12-09 10:21:34.242194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.705 [2024-12-09 10:21:34.347119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:49.705 [2024-12-09 10:21:34.347258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:52.238 10:21:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:52.238 10:21:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:52.238 spdk_app_start Round 1 00:10:52.238 10:21:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1969598 /var/tmp/spdk-nbd.sock 00:10:52.238 10:21:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969598 ']' 00:10:52.238 10:21:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:52.238 10:21:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.238 10:21:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:52.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
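Each app_repeat round above follows the same write/verify pattern from the nbd_common.sh helpers: two 64 MiB malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each device with O_DIRECT, compared back with cmp, and the disks are stopped before the app is killed and restarted for the next round. A condensed single-round sketch, under the sock path shown above (the temp-file location is simplified here; the real test keeps it under the spdk test tree):

    sock=/var/tmp/spdk-nbd.sock
    rpc=./scripts/rpc.py
    tmp=/tmp/nbdrandtest

    $rpc -s "$sock" bdev_malloc_create 64 4096        # -> Malloc0 (64 MiB, 4 KiB blocks)
    $rpc -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev"                    # byte-for-byte verify
    done
    rm "$tmp"

    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1
    $rpc -s "$sock" spdk_kill_instance SIGTERM        # end the round; app_repeat restarts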
00:10:52.238 10:21:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.238 10:21:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:52.497 10:21:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.497 10:21:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:52.497 10:21:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:53.066 Malloc0 00:10:53.066 10:21:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:53.636 Malloc1 00:10:53.636 10:21:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:53.636 10:21:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:54.206 /dev/nbd0 00:10:54.206 10:21:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:54.206 10:21:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:10:54.206 1+0 records in 00:10:54.206 1+0 records out 00:10:54.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351936 s, 11.6 MB/s 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:54.206 10:21:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:54.206 10:21:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.206 10:21:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:54.206 10:21:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:54.773 /dev/nbd1 00:10:54.773 10:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:54.773 10:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:54.773 10:21:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:54.774 1+0 records in 00:10:54.774 1+0 records out 00:10:54.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329496 s, 12.4 MB/s 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:54.774 10:21:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:54.774 10:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:54.774 10:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:54.774 10:21:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:54.774 10:21:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.774 10:21:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:10:55.342 { 00:10:55.342 "nbd_device": "/dev/nbd0", 00:10:55.342 "bdev_name": "Malloc0" 00:10:55.342 }, 00:10:55.342 { 00:10:55.342 "nbd_device": "/dev/nbd1", 00:10:55.342 "bdev_name": "Malloc1" 00:10:55.342 } 00:10:55.342 ]' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:55.342 { 00:10:55.342 "nbd_device": "/dev/nbd0", 00:10:55.342 "bdev_name": "Malloc0" 00:10:55.342 }, 00:10:55.342 { 00:10:55.342 "nbd_device": "/dev/nbd1", 00:10:55.342 "bdev_name": "Malloc1" 00:10:55.342 } 00:10:55.342 ]' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:55.342 /dev/nbd1' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:55.342 /dev/nbd1' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:55.342 256+0 records in 00:10:55.342 256+0 records out 00:10:55.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060587 s, 173 MB/s 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:55.342 256+0 records in 00:10:55.342 256+0 records out 00:10:55.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308021 s, 34.0 MB/s 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:55.342 256+0 records in 00:10:55.342 256+0 records out 00:10:55.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0399383 s, 26.3 MB/s 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.342 10:21:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:55.600 10:21:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.600 10:21:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.859 10:21:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.429 10:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:56.995 10:21:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:56.995 10:21:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:57.254 10:21:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:57.822 [2024-12-09 10:21:42.228062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:57.822 [2024-12-09 10:21:42.345355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.822 [2024-12-09 10:21:42.345369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.822 [2024-12-09 10:21:42.450953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:57.823 [2024-12-09 10:21:42.451088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:00.359 10:21:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:00.360 10:21:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:00.360 spdk_app_start Round 2 00:11:00.360 10:21:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1969598 /var/tmp/spdk-nbd.sock 00:11:00.360 10:21:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969598 ']' 00:11:00.360 10:21:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:00.360 10:21:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.360 10:21:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:00.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
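The count checks bracketing each round (nbd_get_count in the trace above) reduce the nbd_get_disks JSON to device names with jq and count them with grep -c: 2 while both disks are mapped, 0 after the nbd_stop_disk calls. A sketch of that check, under the same assumptions as the round sketch earlier (the error branch is illustrative):

    sock=/var/tmp/spdk-nbd.sock
    json=$(./scripts/rpc.py -s "$sock" nbd_get_disks)     # '[]' once both disks stop
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd) || true     # grep exits 1 on zero matches
    if [ "$count" -ne 0 ]; then
        echo "nbd devices still mapped: $names" >&2
        exit 1
    fi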
00:11:00.360 10:21:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.360 10:21:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:00.926 10:21:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.926 10:21:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:00.926 10:21:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:01.185 Malloc0 00:11:01.185 10:21:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:01.777 Malloc1 00:11:01.777 10:21:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:01.777 10:21:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:02.345 /dev/nbd0 00:11:02.345 10:21:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:02.345 10:21:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:11:02.345 1+0 records in 00:11:02.345 1+0 records out 00:11:02.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023281 s, 17.6 MB/s 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:02.345 10:21:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:02.345 10:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.345 10:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.345 10:21:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:02.912 /dev/nbd1 00:11:02.912 10:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:02.912 10:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:02.912 1+0 records in 00:11:02.912 1+0 records out 00:11:02.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330785 s, 12.4 MB/s 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:02.912 10:21:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:02.912 10:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.912 10:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.912 10:21:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:02.912 10:21:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.912 10:21:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:03.170 10:21:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:11:03.171 { 00:11:03.171 "nbd_device": "/dev/nbd0", 00:11:03.171 "bdev_name": "Malloc0" 00:11:03.171 }, 00:11:03.171 { 00:11:03.171 "nbd_device": "/dev/nbd1", 00:11:03.171 "bdev_name": "Malloc1" 00:11:03.171 } 00:11:03.171 ]' 00:11:03.171 10:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:03.171 { 00:11:03.171 "nbd_device": "/dev/nbd0", 00:11:03.171 "bdev_name": "Malloc0" 00:11:03.171 }, 00:11:03.171 { 00:11:03.171 "nbd_device": "/dev/nbd1", 00:11:03.171 "bdev_name": "Malloc1" 00:11:03.171 } 00:11:03.171 ]' 00:11:03.171 10:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:03.429 /dev/nbd1' 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:03.429 /dev/nbd1' 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:03.429 256+0 records in 00:11:03.429 256+0 records out 00:11:03.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566145 s, 185 MB/s 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:03.429 256+0 records in 00:11:03.429 256+0 records out 00:11:03.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0418629 s, 25.0 MB/s 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:03.429 256+0 records in 00:11:03.429 256+0 records out 00:11:03.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270137 s, 38.8 MB/s 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:03.429 10:21:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.430 10:21:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.997 10:21:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.563 10:21:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:05.130 10:21:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:05.130 10:21:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:05.698 10:21:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:05.958 [2024-12-09 10:21:50.375067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:05.958 [2024-12-09 10:21:50.489629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.958 [2024-12-09 10:21:50.489643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.958 [2024-12-09 10:21:50.592812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:05.958 [2024-12-09 10:21:50.592951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:08.497 10:21:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1969598 /var/tmp/spdk-nbd.sock 00:11:08.497 10:21:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969598 ']' 00:11:08.497 10:21:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:08.497 10:21:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.497 10:21:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:08.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
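Once the third round's shutdown is requested, the trap is cleared and killprocess 1969598 tears the app down; the trace in the following lines shows its guard rails: confirm the pid is alive with kill -0, resolve the command name with ps (reactor_0 here), and refuse to signal a sudo wrapper directly. A condensed sketch of that helper, simplified from the autotest_common.sh flow traced here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0
        [ "$name" = sudo ] && return 1                    # never signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap if it is our child
    }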
00:11:08.497 10:21:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.497 10:21:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:09.064 10:21:53 event.app_repeat -- event/event.sh@39 -- # killprocess 1969598 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1969598 ']' 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1969598 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969598 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969598' 00:11:09.064 killing process with pid 1969598 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1969598 00:11:09.064 10:21:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1969598 00:11:09.324 spdk_app_start is called in Round 0. 00:11:09.324 Shutdown signal received, stop current app iteration 00:11:09.324 Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 reinitialization... 00:11:09.324 spdk_app_start is called in Round 1. 00:11:09.324 Shutdown signal received, stop current app iteration 00:11:09.324 Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 reinitialization... 00:11:09.324 spdk_app_start is called in Round 2. 00:11:09.324 Shutdown signal received, stop current app iteration 00:11:09.324 Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 reinitialization... 00:11:09.324 spdk_app_start is called in Round 3. 
00:11:09.324 Shutdown signal received, stop current app iteration 00:11:09.324 10:21:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:09.324 10:21:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:09.324 00:11:09.324 real 0m26.058s 00:11:09.324 user 0m59.958s 00:11:09.324 sys 0m5.491s 00:11:09.324 10:21:53 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.324 10:21:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:09.324 ************************************ 00:11:09.324 END TEST app_repeat 00:11:09.324 ************************************ 00:11:09.324 10:21:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:09.324 10:21:53 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:09.324 10:21:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:09.324 10:21:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.324 10:21:53 event -- common/autotest_common.sh@10 -- # set +x 00:11:09.324 ************************************ 00:11:09.324 START TEST cpu_locks 00:11:09.324 ************************************ 00:11:09.324 10:21:53 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:11:09.583 * Looking for test storage... 00:11:09.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.583 10:21:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.583 --rc genhtml_branch_coverage=1 00:11:09.583 --rc genhtml_function_coverage=1 00:11:09.583 --rc genhtml_legend=1 00:11:09.583 --rc geninfo_all_blocks=1 00:11:09.583 --rc geninfo_unexecuted_blocks=1 00:11:09.583 00:11:09.583 ' 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.583 --rc genhtml_branch_coverage=1 00:11:09.583 --rc genhtml_function_coverage=1 00:11:09.583 --rc genhtml_legend=1 00:11:09.583 --rc geninfo_all_blocks=1 00:11:09.583 --rc geninfo_unexecuted_blocks=1 00:11:09.583 00:11:09.583 ' 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.583 --rc genhtml_branch_coverage=1 00:11:09.583 --rc genhtml_function_coverage=1 00:11:09.583 --rc genhtml_legend=1 00:11:09.583 --rc geninfo_all_blocks=1 00:11:09.583 --rc geninfo_unexecuted_blocks=1 00:11:09.583 00:11:09.583 ' 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.583 --rc genhtml_branch_coverage=1 00:11:09.583 --rc genhtml_function_coverage=1 00:11:09.583 --rc genhtml_legend=1 00:11:09.583 --rc geninfo_all_blocks=1 00:11:09.583 --rc geninfo_unexecuted_blocks=1 00:11:09.583 00:11:09.583 ' 00:11:09.583 10:21:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:09.583 10:21:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:09.583 10:21:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:09.583 10:21:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.583 10:21:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:09.842 ************************************ 
00:11:09.842 START TEST default_locks 00:11:09.842 ************************************ 00:11:09.842 10:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:09.842 10:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1972785 00:11:09.842 10:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:09.843 10:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1972785 00:11:09.843 10:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1972785 ']' 00:11:09.843 10:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.843 10:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.843 10:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.843 10:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.843 10:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:09.843 [2024-12-09 10:21:54.310326] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:09.843 [2024-12-09 10:21:54.310430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972785 ] 00:11:09.843 [2024-12-09 10:21:54.444140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.102 [2024-12-09 10:21:54.566634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.672 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.672 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:10.672 10:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1972785 00:11:10.672 10:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1972785 00:11:10.672 10:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:10.933 lslocks: write error 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1972785 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1972785 ']' 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1972785 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972785 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1972785' 00:11:10.933 killing process with pid 1972785 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1972785 00:11:10.933 10:21:55 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1972785 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1972785 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1972785 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1972785 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1972785 ']' 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1972785) - No such process 00:11:11.869 ERROR: process (pid: 1972785) is no longer running 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:11.869 00:11:11.869 real 0m2.035s 00:11:11.869 user 0m2.088s 00:11:11.869 sys 0m0.915s 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.869 10:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.869 ************************************ 00:11:11.869 END TEST default_locks 00:11:11.869 ************************************ 00:11:11.869 10:21:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:11.869 10:21:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.869 10:21:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.869 10:21:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.869 ************************************ 00:11:11.869 START TEST default_locks_via_rpc 00:11:11.869 ************************************ 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1973043 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1973043 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1973043 ']' 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
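Note: the default_locks test that just completed follows a simple pattern: start one spdk_tgt on core mask 0x1 (pid 1972785), prove the CPU-core lock exists via lslocks, kill the target, then assert through the NOT wrapper that waitforlisten now fails ("No such process", es=1). A minimal re-creation of the two helpers; the names and probe commands come from the trace, the bodies are assumptions:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # core-lock file held by pid $1?
  }
  NOT() {
    # invert the wrapped command: pass only when it fails
    if "$@"; then return 1; else return 0; fi
  }
  # Shape of the test above: lock present while the target runs, and
  # waitforlisten must fail once pid 1972785 is gone:
  #   locks_exist 1972785 && killprocess 1972785 && NOT waitforlisten 1972785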
00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.869 10:21:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.869 [2024-12-09 10:21:56.414168] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:11.869 [2024-12-09 10:21:56.414267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973043 ] 00:11:12.128 [2024-12-09 10:21:56.545158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.128 [2024-12-09 10:21:56.669101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:12.695 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1973043 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1973043 00:11:12.696 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:13.263 10:21:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1973043 00:11:13.263 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1973043 ']' 00:11:13.263 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1973043 00:11:13.263 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:13.263 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.263 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973043 00:11:13.521 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.521 
10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.521 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973043' 00:11:13.521 killing process with pid 1973043 00:11:13.521 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1973043 00:11:13.521 10:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1973043 00:11:14.119 00:11:14.119 real 0m2.246s 00:11:14.119 user 0m2.198s 00:11:14.119 sys 0m0.996s 00:11:14.119 10:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.119 10:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.119 ************************************ 00:11:14.119 END TEST default_locks_via_rpc 00:11:14.119 ************************************ 00:11:14.119 10:21:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:14.119 10:21:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.119 10:21:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.119 10:21:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:14.119 ************************************ 00:11:14.119 START TEST non_locking_app_on_locked_coremask 00:11:14.119 ************************************ 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1973342 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1973342 /var/tmp/spdk.sock 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1973342 ']' 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.119 10:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.119 [2024-12-09 10:21:58.747401] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
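Note: default_locks_via_rpc, finished above, checks the same lock but toggles it at runtime: framework_disable_cpumask_locks must leave no /var/tmp/spdk_cpu_lock_* files, and framework_enable_cpumask_locks must bring the lslocks entry back. A hypothetical replay of that round-trip; the RPC method names are verbatim from the log, while the rpc.py path and pid variable are placeholders:

  RPC="./scripts/rpc.py"                  # assumption: stock SPDK RPC client
  SOCK=/var/tmp/spdk.sock
  "$RPC" -s "$SOCK" framework_disable_cpumask_locks
  lock_files=(/var/tmp/spdk_cpu_lock_*)
  # with locks disabled the glob must match nothing, i.e. the (( 0 != 0 )) above
  [ -e "${lock_files[0]}" ] && echo "unexpected lock file survived" >&2
  "$RPC" -s "$SOCK" framework_enable_cpumask_locks
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock   # $tgt_pid: placeholder pid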
00:11:14.119 [2024-12-09 10:21:58.747512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973342 ] 00:11:14.407 [2024-12-09 10:21:58.883566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.407 [2024-12-09 10:21:58.989138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1973474 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1973474 /var/tmp/spdk2.sock 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1973474 ']' 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:14.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.988 10:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.988 [2024-12-09 10:21:59.586194] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:14.988 [2024-12-09 10:21:59.586376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973474 ] 00:11:15.247 [2024-12-09 10:21:59.850307] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:15.247 [2024-12-09 10:21:59.850380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.506 [2024-12-09 10:22:00.096804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.883 10:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.883 10:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:16.883 10:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1973342 00:11:16.883 10:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1973342 00:11:16.883 10:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:17.450 lslocks: write error 00:11:17.450 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1973342 00:11:17.450 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1973342 ']' 00:11:17.450 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1973342 00:11:17.450 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:17.450 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.451 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973342 00:11:17.451 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.451 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.451 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973342' 00:11:17.451 killing process with pid 1973342 00:11:17.451 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1973342 00:11:17.451 10:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1973342 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1973474 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1973474 ']' 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1973474 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973474 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973474' 00:11:19.356 
killing process with pid 1973474 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1973474 00:11:19.356 10:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1973474 00:11:19.924 00:11:19.924 real 0m5.683s 00:11:19.924 user 0m6.249s 00:11:19.924 sys 0m1.852s 00:11:19.924 10:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.924 10:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:19.924 ************************************ 00:11:19.924 END TEST non_locking_app_on_locked_coremask 00:11:19.924 ************************************ 00:11:19.924 10:22:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:19.924 10:22:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.924 10:22:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.924 10:22:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:19.924 ************************************ 00:11:19.924 START TEST locking_app_on_unlocked_coremask 00:11:19.924 ************************************ 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1974042 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1974042 /var/tmp/spdk.sock 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1974042 ']' 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.924 10:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:19.924 [2024-12-09 10:22:04.554764] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:19.924 [2024-12-09 10:22:04.554961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974042 ] 00:11:20.182 [2024-12-09 10:22:04.726522] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
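Note: non_locking_app_on_locked_coremask, which ends above, runs two targets on the same mask 0x1: the first (pid 1973342) takes the core-0 lock, while the second (pid 1973474) starts with --disable-cpumask-locks on a second RPC socket and boots successfully despite the overlap. A sketch of that launch pattern; the binary path matches the log, the pid capture and final check are illustrative assumptions:

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$BIN" -m 0x1 &                        # first target claims the core-0 lock
  pid1=$!
  "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                # second boots despite the overlap
  lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "lock held by pid1 only"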
00:11:20.182 [2024-12-09 10:22:04.726615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.441 [2024-12-09 10:22:04.848145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1974172 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1974172 /var/tmp/spdk2.sock 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1974172 ']' 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:20.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.707 10:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:20.976 [2024-12-09 10:22:05.453311] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:11:20.977 [2024-12-09 10:22:05.453493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974172 ] 00:11:21.236 [2024-12-09 10:22:05.726095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.495 [2024-12-09 10:22:05.968014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.426 10:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.426 10:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:22.426 10:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1974172 00:11:22.426 10:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1974172 00:11:22.426 10:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:22.685 lslocks: write error 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1974042 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1974042 ']' 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1974042 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974042 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974042' 00:11:22.685 killing process with pid 1974042 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1974042 00:11:22.685 10:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1974042 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1974172 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1974172 ']' 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1974172 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974172 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.586 10:22:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974172' 00:11:24.586 killing process with pid 1974172 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1974172 00:11:24.586 10:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1974172 00:11:25.153 00:11:25.153 real 0m5.129s 00:11:25.153 user 0m5.625s 00:11:25.153 sys 0m1.640s 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:25.153 ************************************ 00:11:25.153 END TEST locking_app_on_unlocked_coremask 00:11:25.153 ************************************ 00:11:25.153 10:22:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:25.153 10:22:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:25.153 10:22:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.153 10:22:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:25.153 ************************************ 00:11:25.153 START TEST locking_app_on_locked_coremask 00:11:25.153 ************************************ 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1974731 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1974731 /var/tmp/spdk.sock 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1974731 ']' 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.153 10:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:25.153 [2024-12-09 10:22:09.765546] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
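Note: locking_app_on_unlocked_coremask, completed above, is the mirror image of the previous test: the first target (pid 1974042) starts with --disable-cpumask-locks and leaves core 0 unlocked, so the second, plain target (pid 1974172) is free to claim the lock itself; both are then killed in order. A sketch under the same assumptions, reusing BIN from the previous note:

  "$BIN" -m 0x1 --disable-cpumask-locks &    # takes no lock (cf. pid 1974042)
  pid1=$!
  "$BIN" -m 0x1 -r /var/tmp/spdk2.sock &     # free to claim core 0 (cf. pid 1974172)
  pid2=$!
  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second target owns the lock"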
00:11:25.153 [2024-12-09 10:22:09.765751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974731 ] 00:11:25.412 [2024-12-09 10:22:09.936173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.412 [2024-12-09 10:22:10.053741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1974750 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1974750 /var/tmp/spdk2.sock 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1974750 /var/tmp/spdk2.sock 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1974750 /var/tmp/spdk2.sock 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1974750 ']' 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:25.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.979 10:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:25.979 [2024-12-09 10:22:10.609442] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:11:25.980 [2024-12-09 10:22:10.609558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974750 ] 00:11:26.238 [2024-12-09 10:22:10.818391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1974731 has claimed it. 00:11:26.238 [2024-12-09 10:22:10.818523] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:27.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1974750) - No such process 00:11:27.176 ERROR: process (pid: 1974750) is no longer running 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1974731 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1974731 00:11:27.176 10:22:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:27.435 lslocks: write error 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1974731 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1974731 ']' 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1974731 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974731 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.435 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.436 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974731' 00:11:27.436 killing process with pid 1974731 00:11:27.436 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1974731 00:11:27.436 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1974731 00:11:28.374 00:11:28.374 real 0m3.176s 00:11:28.374 user 0m3.634s 00:11:28.374 sys 0m1.175s 00:11:28.374 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:28.374 10:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:28.374 ************************************ 00:11:28.374 END TEST locking_app_on_locked_coremask 00:11:28.374 ************************************ 00:11:28.374 10:22:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:28.374 10:22:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.374 10:22:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.374 10:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:28.374 ************************************ 00:11:28.374 START TEST locking_overlapped_coremask 00:11:28.374 ************************************ 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1975047 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1975047 /var/tmp/spdk.sock 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975047 ']' 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.374 10:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:28.374 [2024-12-09 10:22:13.017968] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
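Note: locking_app_on_locked_coremask, finished above, asserts the opposite outcome: with pid 1974731 holding the core-0 lock, a second plain instance must refuse to start, and the trace shows exactly that ("Cannot create lock on core 0, probably process 1974731 has claimed it"), with the NOT wrapper turning the expected failure into a pass. A sketch of that assertion, reusing the NOT helper from the default_locks note; pid2 is the refused second instance (placeholder):

  if NOT waitforlisten "$pid2" /var/tmp/spdk2.sock; then
    echo "second instance correctly denied the core-0 lock (es=1 in the trace)"
  fi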
00:11:28.374 [2024-12-09 10:22:13.018164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975047 ] 00:11:28.634 [2024-12-09 10:22:13.181035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.893 [2024-12-09 10:22:13.306882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.893 [2024-12-09 10:22:13.306995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.893 [2024-12-09 10:22:13.307005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1975177 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1975177 /var/tmp/spdk2.sock 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1975177 /var/tmp/spdk2.sock 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1975177 /var/tmp/spdk2.sock 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975177 ']' 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:29.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.153 10:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:29.413 [2024-12-09 10:22:13.855989] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:11:29.413 [2024-12-09 10:22:13.856163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975177 ] 00:11:29.673 [2024-12-09 10:22:14.086975] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1975047 has claimed it. 00:11:29.673 [2024-12-09 10:22:14.087088] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:30.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1975177) - No such process 00:11:30.238 ERROR: process (pid: 1975177) is no longer running 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1975047 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1975047 ']' 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1975047 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.238 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975047 00:11:30.496 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.496 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.496 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975047' 00:11:30.496 killing process with pid 1975047 00:11:30.496 10:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1975047 00:11:30.496 10:22:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1975047 00:11:31.060 00:11:31.060 real 0m2.668s 00:11:31.060 user 0m7.456s 00:11:31.060 sys 0m0.823s 00:11:31.060 10:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.060 10:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.060 ************************************ 00:11:31.060 END TEST locking_overlapped_coremask 00:11:31.060 ************************************ 00:11:31.060 10:22:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:31.060 10:22:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.060 10:22:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.060 10:22:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:31.060 ************************************ 00:11:31.060 START TEST locking_overlapped_coremask_via_rpc 00:11:31.061 ************************************ 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1975464 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1975464 /var/tmp/spdk.sock 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975464 ']' 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.061 10:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.318 [2024-12-09 10:22:15.768477] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:31.318 [2024-12-09 10:22:15.768669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975464 ] 00:11:31.318 [2024-12-09 10:22:15.927748] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
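Note: locking_overlapped_coremask, ending above, moves to multi-core masks: the first target claims 0x7 (cores 0-2), the second requests 0x1c (cores 2-4) and is rejected because core 2 is shared, after which check_remaining_locks verifies exactly spdk_cpu_lock_000 through _002 survive. The collision is plain bit arithmetic; a quick check (lock-file naming taken from the trace):

  # Why 0x7 and 0x1c collide: they share bit 2, i.e. core 2.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints: overlap mask: 0x4
  # check_remaining_locks then expects exactly the first target's three locks:
  ls /var/tmp/spdk_cpu_lock_00{0..2}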
00:11:31.318 [2024-12-09 10:22:15.927842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.576 [2024-12-09 10:22:16.057954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.576 [2024-12-09 10:22:16.058054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.576 [2024-12-09 10:22:16.058063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1975483 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1975483 /var/tmp/spdk2.sock 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975483 ']' 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:32.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.141 10:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.141 [2024-12-09 10:22:16.584452] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:32.141 [2024-12-09 10:22:16.584554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975483 ] 00:11:32.141 [2024-12-09 10:22:16.782220] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:32.141 [2024-12-09 10:22:16.782321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.399 [2024-12-09 10:22:16.999889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.399 [2024-12-09 10:22:17.003775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:32.399 [2024-12-09 10:22:17.003779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.337 [2024-12-09 10:22:17.842827] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1975464 has claimed it. 
00:11:33.337 request: 00:11:33.337 { 00:11:33.337 "method": "framework_enable_cpumask_locks", 00:11:33.337 "req_id": 1 00:11:33.337 } 00:11:33.337 Got JSON-RPC error response 00:11:33.337 response: 00:11:33.337 { 00:11:33.337 "code": -32603, 00:11:33.337 "message": "Failed to claim CPU core: 2" 00:11:33.337 } 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1975464 /var/tmp/spdk.sock 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975464 ']' 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.337 10:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.595 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:33.595 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1975483 /var/tmp/spdk2.sock 00:11:33.595 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975483 ']' 00:11:33.595 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:33.595 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.853 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:33.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
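The -32603 failure above comes from two spdk_tgt instances whose core masks overlap on core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4). A minimal sketch of the scenario, using only the binaries, sockets, and RPC method shown in the trace:

    # Both targets start with core locks disabled, masks overlapping on core 2:
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

    # First instance claims its cores (creates /var/tmp/spdk_cpu_lock_000..002):
    scripts/rpc.py framework_enable_cpumask_locks

    # Second instance then fails with JSON-RPC error -32603,
    # "Failed to claim CPU core: 2", since core 2 is already locked:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks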
00:11:33.853 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.853 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:34.111 00:11:34.111 real 0m2.939s 00:11:34.111 user 0m1.882s 00:11:34.111 sys 0m0.248s 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.111 10:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.111 ************************************ 00:11:34.111 END TEST locking_overlapped_coremask_via_rpc 00:11:34.111 ************************************ 00:11:34.111 10:22:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:34.111 10:22:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1975464 ]] 00:11:34.111 10:22:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1975464 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975464 ']' 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975464 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975464 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975464' 00:11:34.111 killing process with pid 1975464 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1975464 00:11:34.111 10:22:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1975464 00:11:34.680 10:22:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1975483 ]] 00:11:34.680 10:22:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1975483 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975483 ']' 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975483 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975483 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975483' 00:11:34.680 killing process with pid 1975483 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1975483 00:11:34.680 10:22:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1975483 00:11:35.250 10:22:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:35.250 10:22:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:35.250 10:22:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1975464 ]] 00:11:35.250 10:22:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1975464 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975464 ']' 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975464 00:11:35.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1975464) - No such process 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1975464 is not found' 00:11:35.250 Process with pid 1975464 is not found 00:11:35.250 10:22:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1975483 ]] 00:11:35.250 10:22:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1975483 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975483 ']' 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975483 00:11:35.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1975483) - No such process 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1975483 is not found' 00:11:35.250 Process with pid 1975483 is not found 00:11:35.250 10:22:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:35.250 00:11:35.250 real 0m25.788s 00:11:35.250 user 0m43.912s 00:11:35.250 sys 0m9.068s 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.250 10:22:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:35.250 ************************************ 00:11:35.250 END TEST cpu_locks 00:11:35.250 ************************************ 00:11:35.250 00:11:35.250 real 0m59.657s 00:11:35.250 user 1m54.733s 00:11:35.250 sys 0m15.925s 00:11:35.250 10:22:19 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.250 10:22:19 event -- common/autotest_common.sh@10 -- # set +x 00:11:35.250 ************************************ 00:11:35.250 END TEST event 00:11:35.250 ************************************ 00:11:35.250 10:22:19 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:35.250 10:22:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.250 10:22:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.250 10:22:19 -- common/autotest_common.sh@10 -- # set +x 00:11:35.250 ************************************ 00:11:35.250 START TEST thread 00:11:35.250 ************************************ 00:11:35.250 10:22:19 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:11:35.250 * Looking for test storage... 00:11:35.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:11:35.509 10:22:19 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.509 10:22:19 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.509 10:22:19 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.509 10:22:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.509 10:22:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.509 10:22:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.509 10:22:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.509 10:22:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.509 10:22:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.509 10:22:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.509 10:22:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.509 10:22:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.509 10:22:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.509 10:22:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.509 10:22:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:35.509 10:22:20 thread -- scripts/common.sh@345 -- # : 1 00:11:35.509 10:22:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.509 10:22:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.509 10:22:20 thread -- scripts/common.sh@365 -- # decimal 1 00:11:35.509 10:22:20 thread -- scripts/common.sh@353 -- # local d=1 00:11:35.509 10:22:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.509 10:22:20 thread -- scripts/common.sh@355 -- # echo 1 00:11:35.509 10:22:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.509 10:22:20 thread -- scripts/common.sh@366 -- # decimal 2 00:11:35.509 10:22:20 thread -- scripts/common.sh@353 -- # local d=2 00:11:35.509 10:22:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.509 10:22:20 thread -- scripts/common.sh@355 -- # echo 2 00:11:35.509 10:22:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.509 10:22:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.509 10:22:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.509 10:22:20 thread -- scripts/common.sh@368 -- # return 0 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.509 --rc genhtml_branch_coverage=1 00:11:35.509 --rc genhtml_function_coverage=1 00:11:35.509 --rc genhtml_legend=1 00:11:35.509 --rc geninfo_all_blocks=1 00:11:35.509 --rc geninfo_unexecuted_blocks=1 00:11:35.509 00:11:35.509 ' 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.509 --rc genhtml_branch_coverage=1 00:11:35.509 --rc genhtml_function_coverage=1 00:11:35.509 --rc genhtml_legend=1 00:11:35.509 --rc geninfo_all_blocks=1 00:11:35.509 --rc geninfo_unexecuted_blocks=1 00:11:35.509 
00:11:35.509 ' 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.509 --rc genhtml_branch_coverage=1 00:11:35.509 --rc genhtml_function_coverage=1 00:11:35.509 --rc genhtml_legend=1 00:11:35.509 --rc geninfo_all_blocks=1 00:11:35.509 --rc geninfo_unexecuted_blocks=1 00:11:35.509 00:11:35.509 ' 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.509 --rc genhtml_branch_coverage=1 00:11:35.509 --rc genhtml_function_coverage=1 00:11:35.509 --rc genhtml_legend=1 00:11:35.509 --rc geninfo_all_blocks=1 00:11:35.509 --rc geninfo_unexecuted_blocks=1 00:11:35.509 00:11:35.509 ' 00:11:35.509 10:22:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.509 10:22:20 thread -- common/autotest_common.sh@10 -- # set +x 00:11:35.509 ************************************ 00:11:35.509 START TEST thread_poller_perf 00:11:35.509 ************************************ 00:11:35.509 10:22:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:35.509 [2024-12-09 10:22:20.137362] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:35.509 [2024-12-09 10:22:20.137446] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976055 ] 00:11:35.768 [2024-12-09 10:22:20.270650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.768 [2024-12-09 10:22:20.397550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.768 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:11:37.166 [2024-12-09T09:22:21.820Z] ====================================== 00:11:37.166 [2024-12-09T09:22:21.820Z] busy:2733091992 (cyc) 00:11:37.166 [2024-12-09T09:22:21.820Z] total_run_count: 133000 00:11:37.166 [2024-12-09T09:22:21.820Z] tsc_hz: 2700000000 (cyc) 00:11:37.166 [2024-12-09T09:22:21.820Z] ====================================== 00:11:37.166 [2024-12-09T09:22:21.820Z] poller_cost: 20549 (cyc), 7610 (nsec) 00:11:37.166 00:11:37.166 real 0m1.416s 00:11:37.166 user 0m1.294s 00:11:37.166 sys 0m0.111s 00:11:37.166 10:22:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.166 10:22:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:37.166 ************************************ 00:11:37.166 END TEST thread_poller_perf 00:11:37.166 ************************************ 00:11:37.166 10:22:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:37.166 10:22:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:37.166 10:22:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.166 10:22:21 thread -- common/autotest_common.sh@10 -- # set +x 00:11:37.166 ************************************ 00:11:37.166 START TEST thread_poller_perf 00:11:37.166 ************************************ 00:11:37.166 10:22:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:37.166 [2024-12-09 10:22:21.643191] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:37.166 [2024-12-09 10:22:21.643338] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976264 ] 00:11:37.166 [2024-12-09 10:22:21.802523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.424 [2024-12-09 10:22:21.919855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.424 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:11:38.798 [2024-12-09T09:22:23.452Z] ====================================== 00:11:38.798 [2024-12-09T09:22:23.452Z] busy:2704360941 (cyc) 00:11:38.798 [2024-12-09T09:22:23.452Z] total_run_count: 1617000 00:11:38.798 [2024-12-09T09:22:23.452Z] tsc_hz: 2700000000 (cyc) 00:11:38.798 [2024-12-09T09:22:23.452Z] ====================================== 00:11:38.798 [2024-12-09T09:22:23.452Z] poller_cost: 1672 (cyc), 619 (nsec) 00:11:38.798 00:11:38.798 real 0m1.420s 00:11:38.798 user 0m1.272s 00:11:38.798 sys 0m0.135s 00:11:38.798 10:22:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.798 10:22:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:38.798 ************************************ 00:11:38.798 END TEST thread_poller_perf 00:11:38.798 ************************************ 00:11:38.798 10:22:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:38.798 00:11:38.798 real 0m3.258s 00:11:38.798 user 0m2.820s 00:11:38.798 sys 0m0.436s 00:11:38.798 10:22:23 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.798 10:22:23 thread -- common/autotest_common.sh@10 -- # set +x 00:11:38.798 ************************************ 00:11:38.798 END TEST thread 00:11:38.798 ************************************ 00:11:38.799 10:22:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:38.799 10:22:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:11:38.799 10:22:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.799 10:22:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.799 10:22:23 -- common/autotest_common.sh@10 -- # set +x 00:11:38.799 ************************************ 00:11:38.799 START TEST app_cmdline 00:11:38.799 ************************************ 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:11:38.799 * Looking for test storage... 
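For reference, the poller_perf summary above follows directly from the logged counters; a small sketch of the arithmetic (variable names are illustrative, values copied from the run above):

    busy_cyc=2704360941   # busy: TSC cycles spent running pollers for the 1 s run
    runs=1617000          # total_run_count
    tsc_hz=2700000000     # tsc_hz: 2.7 GHz timestamp counter

    cost_cyc=$((busy_cyc / runs))                   # 1672 (cyc)
    cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))   # 619 (nsec)
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same computation on the first run's counters (2733091992 cycles over 133000 runs) gives the 20549 (cyc) and 7610 (nsec) reported there.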
00:11:38.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.799 10:22:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.799 --rc genhtml_branch_coverage=1 00:11:38.799 --rc genhtml_function_coverage=1 00:11:38.799 --rc genhtml_legend=1 00:11:38.799 --rc geninfo_all_blocks=1 00:11:38.799 --rc geninfo_unexecuted_blocks=1 00:11:38.799 00:11:38.799 ' 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.799 --rc genhtml_branch_coverage=1 00:11:38.799 --rc genhtml_function_coverage=1 00:11:38.799 --rc genhtml_legend=1 00:11:38.799 --rc geninfo_all_blocks=1 00:11:38.799 --rc geninfo_unexecuted_blocks=1 
00:11:38.799 00:11:38.799 ' 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.799 --rc genhtml_branch_coverage=1 00:11:38.799 --rc genhtml_function_coverage=1 00:11:38.799 --rc genhtml_legend=1 00:11:38.799 --rc geninfo_all_blocks=1 00:11:38.799 --rc geninfo_unexecuted_blocks=1 00:11:38.799 00:11:38.799 ' 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.799 --rc genhtml_branch_coverage=1 00:11:38.799 --rc genhtml_function_coverage=1 00:11:38.799 --rc genhtml_legend=1 00:11:38.799 --rc geninfo_all_blocks=1 00:11:38.799 --rc geninfo_unexecuted_blocks=1 00:11:38.799 00:11:38.799 ' 00:11:38.799 10:22:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:38.799 10:22:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1976471 00:11:38.799 10:22:23 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:38.799 10:22:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1976471 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1976471 ']' 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.799 10:22:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:39.057 [2024-12-09 10:22:23.563401] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:11:39.057 [2024-12-09 10:22:23.563589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976471 ] 00:11:39.316 [2024-12-09 10:22:23.727292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.316 [2024-12-09 10:22:23.847923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.883 10:22:24 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.883 10:22:24 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:39.883 10:22:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:40.447 { 00:11:40.447 "version": "SPDK v25.01-pre git sha1 b7d7c4b24", 00:11:40.447 "fields": { 00:11:40.447 "major": 25, 00:11:40.447 "minor": 1, 00:11:40.447 "patch": 0, 00:11:40.447 "suffix": "-pre", 00:11:40.447 "commit": "b7d7c4b24" 00:11:40.447 } 00:11:40.447 } 00:11:40.447 10:22:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:40.447 10:22:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:40.447 10:22:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:40.447 10:22:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:40.447 10:22:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:40.447 10:22:24 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.447 10:22:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:40.447 10:22:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:40.447 10:22:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:40.447 10:22:24 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.447 10:22:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:40.447 10:22:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:40.447 10:22:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:40.447 10:22:25 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:41.380 request: 00:11:41.380 { 00:11:41.380 "method": "env_dpdk_get_mem_stats", 00:11:41.380 "req_id": 1 00:11:41.380 } 00:11:41.380 Got JSON-RPC error response 00:11:41.380 response: 00:11:41.380 { 00:11:41.380 "code": -32601, 00:11:41.380 "message": "Method not found" 00:11:41.380 } 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.380 10:22:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1976471 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1976471 ']' 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1976471 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976471 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976471' 00:11:41.380 killing process with pid 1976471 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@973 -- # kill 1976471 00:11:41.380 10:22:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 1976471 00:11:41.945 00:11:41.945 real 0m3.208s 00:11:41.945 user 0m4.112s 00:11:41.945 sys 0m0.910s 00:11:41.945 10:22:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.946 10:22:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 ************************************ 00:11:41.946 END TEST app_cmdline 00:11:41.946 ************************************ 00:11:41.946 10:22:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:11:41.946 10:22:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.946 10:22:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.946 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 ************************************ 00:11:41.946 START TEST version 00:11:41.946 ************************************ 00:11:41.946 10:22:26 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:11:41.946 * Looking for test storage... 
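The -32601 "Method not found" response above is expected rather than a failure: the cmdline test starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods resolve. A sketch, assuming the rpc.py path from the trace:

    # Whitelisted at startup, so these succeed:
    scripts/rpc.py spdk_get_version
    scripts/rpc.py rpc_get_methods

    # Any other method returns JSON-RPC error -32601 "Method not found":
    scripts/rpc.py env_dpdk_get_mem_stats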
00:11:41.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:41.946 10:22:26 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:41.946 10:22:26 version -- common/autotest_common.sh@1711 -- # lcov --version 00:11:41.946 10:22:26 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.206 10:22:26 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.206 10:22:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.206 10:22:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.206 10:22:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.206 10:22:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.206 10:22:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.206 10:22:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.206 10:22:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.206 10:22:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.206 10:22:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.206 10:22:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.206 10:22:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.206 10:22:26 version -- scripts/common.sh@344 -- # case "$op" in 00:11:42.206 10:22:26 version -- scripts/common.sh@345 -- # : 1 00:11:42.206 10:22:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.206 10:22:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.206 10:22:26 version -- scripts/common.sh@365 -- # decimal 1 00:11:42.206 10:22:26 version -- scripts/common.sh@353 -- # local d=1 00:11:42.206 10:22:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.206 10:22:26 version -- scripts/common.sh@355 -- # echo 1 00:11:42.206 10:22:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.206 10:22:26 version -- scripts/common.sh@366 -- # decimal 2 00:11:42.206 10:22:26 version -- scripts/common.sh@353 -- # local d=2 00:11:42.206 10:22:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.206 10:22:26 version -- scripts/common.sh@355 -- # echo 2 00:11:42.206 10:22:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.206 10:22:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.206 10:22:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.206 10:22:26 version -- scripts/common.sh@368 -- # return 0 00:11:42.206 10:22:26 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.206 10:22:26 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.206 --rc genhtml_branch_coverage=1 00:11:42.206 --rc genhtml_function_coverage=1 00:11:42.206 --rc genhtml_legend=1 00:11:42.206 --rc geninfo_all_blocks=1 00:11:42.206 --rc geninfo_unexecuted_blocks=1 00:11:42.206 00:11:42.206 ' 00:11:42.206 10:22:26 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.206 --rc genhtml_branch_coverage=1 00:11:42.206 --rc genhtml_function_coverage=1 00:11:42.206 --rc genhtml_legend=1 00:11:42.206 --rc geninfo_all_blocks=1 00:11:42.206 --rc geninfo_unexecuted_blocks=1 00:11:42.206 00:11:42.206 ' 00:11:42.206 10:22:26 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.206 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.206 --rc genhtml_branch_coverage=1 00:11:42.206 --rc genhtml_function_coverage=1 00:11:42.206 --rc genhtml_legend=1 00:11:42.206 --rc geninfo_all_blocks=1 00:11:42.206 --rc geninfo_unexecuted_blocks=1 00:11:42.206 00:11:42.206 ' 00:11:42.206 10:22:26 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.206 --rc genhtml_branch_coverage=1 00:11:42.206 --rc genhtml_function_coverage=1 00:11:42.206 --rc genhtml_legend=1 00:11:42.206 --rc geninfo_all_blocks=1 00:11:42.206 --rc geninfo_unexecuted_blocks=1 00:11:42.206 00:11:42.206 ' 00:11:42.206 10:22:26 version -- app/version.sh@17 -- # get_header_version major 00:11:42.207 10:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # cut -f2 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.207 10:22:26 version -- app/version.sh@17 -- # major=25 00:11:42.207 10:22:26 version -- app/version.sh@18 -- # get_header_version minor 00:11:42.207 10:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # cut -f2 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.207 10:22:26 version -- app/version.sh@18 -- # minor=1 00:11:42.207 10:22:26 version -- app/version.sh@19 -- # get_header_version patch 00:11:42.207 10:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # cut -f2 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.207 10:22:26 version -- app/version.sh@19 -- # patch=0 00:11:42.207 10:22:26 version -- app/version.sh@20 -- # get_header_version suffix 00:11:42.207 10:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # cut -f2 00:11:42.207 10:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:11:42.207 10:22:26 version -- app/version.sh@20 -- # suffix=-pre 00:11:42.207 10:22:26 version -- app/version.sh@22 -- # version=25.1 00:11:42.207 10:22:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:42.207 10:22:26 version -- app/version.sh@28 -- # version=25.1rc0 00:11:42.207 10:22:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:42.207 10:22:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:42.207 10:22:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:42.207 10:22:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:42.207 00:11:42.207 real 0m0.313s 00:11:42.207 user 0m0.204s 00:11:42.207 sys 0m0.136s 00:11:42.207 10:22:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.207 
10:22:26 version -- common/autotest_common.sh@10 -- # set +x 00:11:42.207 ************************************ 00:11:42.207 END TEST version 00:11:42.207 ************************************ 00:11:42.207 10:22:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:42.207 10:22:26 -- spdk/autotest.sh@194 -- # uname -s 00:11:42.207 10:22:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:42.207 10:22:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:42.207 10:22:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:42.207 10:22:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:42.207 10:22:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.207 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:11:42.207 10:22:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:11:42.207 10:22:26 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:11:42.207 10:22:26 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:42.207 10:22:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.207 10:22:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.207 10:22:26 -- common/autotest_common.sh@10 -- # set +x 00:11:42.466 ************************************ 00:11:42.466 START TEST nvmf_tcp 00:11:42.466 ************************************ 00:11:42.466 10:22:26 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:42.466 * Looking for test storage... 
00:11:42.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:42.466 10:22:26 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.466 10:22:26 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.466 10:22:26 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.466 10:22:27 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.466 --rc genhtml_branch_coverage=1 00:11:42.466 --rc genhtml_function_coverage=1 00:11:42.466 --rc genhtml_legend=1 00:11:42.466 --rc geninfo_all_blocks=1 00:11:42.466 --rc geninfo_unexecuted_blocks=1 00:11:42.466 00:11:42.466 ' 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.466 --rc genhtml_branch_coverage=1 00:11:42.466 --rc genhtml_function_coverage=1 00:11:42.466 --rc genhtml_legend=1 00:11:42.466 --rc geninfo_all_blocks=1 00:11:42.466 --rc geninfo_unexecuted_blocks=1 00:11:42.466 00:11:42.466 ' 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:11:42.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.466 --rc genhtml_branch_coverage=1 00:11:42.466 --rc genhtml_function_coverage=1 00:11:42.466 --rc genhtml_legend=1 00:11:42.466 --rc geninfo_all_blocks=1 00:11:42.466 --rc geninfo_unexecuted_blocks=1 00:11:42.466 00:11:42.466 ' 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.466 --rc genhtml_branch_coverage=1 00:11:42.466 --rc genhtml_function_coverage=1 00:11:42.466 --rc genhtml_legend=1 00:11:42.466 --rc geninfo_all_blocks=1 00:11:42.466 --rc geninfo_unexecuted_blocks=1 00:11:42.466 00:11:42.466 ' 00:11:42.466 10:22:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:42.466 10:22:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:11:42.466 10:22:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.466 10:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.466 ************************************ 00:11:42.466 START TEST nvmf_target_core 00:11:42.466 ************************************ 00:11:42.466 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:42.726 * Looking for test storage... 00:11:42.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.726 --rc genhtml_branch_coverage=1 00:11:42.726 --rc genhtml_function_coverage=1 00:11:42.726 --rc genhtml_legend=1 00:11:42.726 --rc geninfo_all_blocks=1 00:11:42.726 --rc geninfo_unexecuted_blocks=1 00:11:42.726 00:11:42.726 ' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.726 --rc genhtml_branch_coverage=1 00:11:42.726 --rc genhtml_function_coverage=1 00:11:42.726 --rc genhtml_legend=1 00:11:42.726 --rc geninfo_all_blocks=1 00:11:42.726 --rc geninfo_unexecuted_blocks=1 00:11:42.726 00:11:42.726 ' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.726 --rc genhtml_branch_coverage=1 00:11:42.726 --rc genhtml_function_coverage=1 00:11:42.726 --rc genhtml_legend=1 00:11:42.726 --rc geninfo_all_blocks=1 00:11:42.726 --rc geninfo_unexecuted_blocks=1 00:11:42.726 00:11:42.726 ' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:42.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.726 --rc genhtml_branch_coverage=1 00:11:42.726 --rc genhtml_function_coverage=1 00:11:42.726 --rc genhtml_legend=1 00:11:42.726 --rc geninfo_all_blocks=1 00:11:42.726 --rc geninfo_unexecuted_blocks=1 00:11:42.726 00:11:42.726 ' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.726 10:22:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:42.985 
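Note on the "[: : integer expression expected" warning recorded above: nvmf/common.sh line 33 applies an arithmetic test ('[' '' -eq 1 ']') to a configuration variable that expanded to the empty string in this run, and test(1) cannot compare an empty string numerically. A minimal defensive sketch of the fix follows; the variable name is a placeholder, since the trace only shows it after expansion:

  # Trace shows: '[' '' -eq 1 ']'  ->  "[: : integer expression expected"
  # Giving the flag a numeric default keeps the test well-formed when unset:
  if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then  # placeholder flag name
      NVMF_APP+=(--extra-arg)                     # hypothetical enabled-branch
  fi

The warning is harmless here (the test simply evaluates false and the script continues), which is why the run proceeds to the nvmf_abort test below.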
************************************ 00:11:42.985 START TEST nvmf_abort 00:11:42.985 ************************************ 00:11:42.985 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:42.985 * Looking for test storage... 00:11:42.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.985 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.985 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.985 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.245 --rc genhtml_branch_coverage=1 00:11:43.245 --rc genhtml_function_coverage=1 00:11:43.245 --rc genhtml_legend=1 00:11:43.245 --rc geninfo_all_blocks=1 00:11:43.245 --rc geninfo_unexecuted_blocks=1 00:11:43.245 00:11:43.245 ' 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.245 --rc genhtml_branch_coverage=1 00:11:43.245 --rc genhtml_function_coverage=1 00:11:43.245 --rc genhtml_legend=1 00:11:43.245 --rc geninfo_all_blocks=1 00:11:43.245 --rc geninfo_unexecuted_blocks=1 00:11:43.245 00:11:43.245 ' 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:43.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.245 --rc genhtml_branch_coverage=1 00:11:43.245 --rc genhtml_function_coverage=1 00:11:43.245 --rc genhtml_legend=1 00:11:43.245 --rc geninfo_all_blocks=1 00:11:43.245 --rc geninfo_unexecuted_blocks=1 00:11:43.245 00:11:43.245 ' 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.245 --rc genhtml_branch_coverage=1 00:11:43.245 --rc genhtml_function_coverage=1 00:11:43.245 --rc genhtml_legend=1 00:11:43.245 --rc geninfo_all_blocks=1 00:11:43.245 --rc geninfo_unexecuted_blocks=1 00:11:43.245 00:11:43.245 ' 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:43.245 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
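The nvmftestinit call above, running with NET_TYPE=phy and --transport=tcp, is what produces the device discovery and namespace plumbing traced below: it allow-lists the e810 ports by PCI vendor:device ID, moves one port into a fresh network namespace as the target side, and leaves its peer in the root namespace as the initiator. Condensed from the commands in the trace (interface names and addresses exactly as reported there), the resulting topology amounts to:

  # Target port goes into its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow the NVMe/TCP port through the initiator-side firewall:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Both directions are verified with a single ping before any NVMe traffic:
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two e810 ports answer each other's pings, so they are presumably cabled back-to-back; the namespace boundary is what lets a single host act as both NVMe/TCP target and initiator over a real physical link.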
00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.246 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.610 10:22:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:46.610 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:46.610 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.610 10:22:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.610 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:46.611 Found net devices under 0000:84:00.0: cvl_0_0 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:46.611 Found net devices under 0000:84:00.1: cvl_0_1 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.611 10:22:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:11:46.611 00:11:46.611 --- 10.0.0.2 ping statistics --- 00:11:46.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.611 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:11:46.611 00:11:46.611 --- 10.0.0.1 ping statistics --- 00:11:46.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.611 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1978854 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1978854 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1978854 ']' 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.611 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.611 [2024-12-09 10:22:30.843005] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:11:46.611 [2024-12-09 10:22:30.843111] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.611 [2024-12-09 10:22:31.000872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:46.611 [2024-12-09 10:22:31.119700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.611 [2024-12-09 10:22:31.119826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.611 [2024-12-09 10:22:31.119843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.611 [2024-12-09 10:22:31.119856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.611 [2024-12-09 10:22:31.119868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.611 [2024-12-09 10:22:31.123034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.611 [2024-12-09 10:22:31.123140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.611 [2024-12-09 10:22:31.123144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.910 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 [2024-12-09 10:22:31.304998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 Malloc0 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 Delay0 
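At this point the target stack is fully assembled: nvmf_tgt is running inside the namespace, the TCP transport exists, and a delay bdev (Delay0, wrapping Malloc0 with added per-I/O latency) is ready to be exported so that the abort test has in-flight I/O to catch. rpc_cmd in the harness wraps SPDK's scripts/rpc.py against the app's RPC socket; outside the harness, the same bring-up plus the subsystem wiring that follows in the trace would look roughly like this (rpc.py invocation is the assumed equivalent, flags copied from the trace):

  # Launch the target inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # RPC sequence, as issued by abort.sh:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

The delay bdev is the point of the exercise: with build/examples/abort driving queue depth 128 against Delay0, most I/Os are still parked in the bdev when the aborts arrive, which is what the NS/CTRLR completion counters below are measuring.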
00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 [2024-12-09 10:22:31.384366] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.911 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:47.169 [2024-12-09 10:22:31.571853] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:49.069 Initializing NVMe Controllers 00:11:49.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:49.069 controller IO queue size 128 less than required 00:11:49.069 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:49.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:49.069 Initialization complete. Launching workers. 
00:11:49.069 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27034 00:11:49.069 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27099, failed to submit 62 00:11:49.069 success 27038, unsuccessful 61, failed 0 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.069 rmmod nvme_tcp 00:11:49.069 rmmod nvme_fabrics 00:11:49.069 rmmod nvme_keyring 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1978854 ']' 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1978854 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1978854 ']' 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1978854 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978854 00:11:49.069 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:49.070 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:49.070 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1978854' 00:11:49.070 killing process with pid 1978854 00:11:49.070 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1978854 00:11:49.070 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1978854 00:11:49.638 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.639 10:22:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.639 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.543 00:11:51.543 real 0m8.720s 00:11:51.543 user 0m11.494s 00:11:51.543 sys 0m3.544s 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:51.543 ************************************ 00:11:51.543 END TEST nvmf_abort 00:11:51.543 ************************************ 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:51.543 ************************************ 00:11:51.543 START TEST nvmf_ns_hotplug_stress 00:11:51.543 ************************************ 00:11:51.543 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:51.803 * Looking for test storage... 
00:11:51.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.803 --rc genhtml_branch_coverage=1 00:11:51.803 --rc genhtml_function_coverage=1 00:11:51.803 --rc genhtml_legend=1 00:11:51.803 --rc geninfo_all_blocks=1 00:11:51.803 --rc geninfo_unexecuted_blocks=1 00:11:51.803 00:11:51.803 ' 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.803 --rc genhtml_branch_coverage=1 00:11:51.803 --rc genhtml_function_coverage=1 00:11:51.803 --rc genhtml_legend=1 00:11:51.803 --rc geninfo_all_blocks=1 00:11:51.803 --rc geninfo_unexecuted_blocks=1 00:11:51.803 00:11:51.803 ' 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.803 --rc genhtml_branch_coverage=1 00:11:51.803 --rc genhtml_function_coverage=1 00:11:51.803 --rc genhtml_legend=1 00:11:51.803 --rc geninfo_all_blocks=1 00:11:51.803 --rc geninfo_unexecuted_blocks=1 00:11:51.803 00:11:51.803 ' 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.803 --rc genhtml_branch_coverage=1 00:11:51.803 --rc genhtml_function_coverage=1 00:11:51.803 --rc genhtml_legend=1 00:11:51.803 --rc geninfo_all_blocks=1 00:11:51.803 --rc geninfo_unexecuted_blocks=1 00:11:51.803 00:11:51.803 ' 00:11:51.803 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
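The giant PATH assignments above are paths/export.sh prepending the Go, protoc, and golangci toolchain directories each time it is sourced, so repeated sourcing stacks duplicate components. Lookup is unaffected because the first match wins; if trimming were ever wanted, a dedupe along these lines would do (illustrative only, not part of the harness):

# keep the first occurrence of each PATH component, preserving order
PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
export PATH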
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.063 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.064 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.064 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.064 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.064 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.064 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.064 10:22:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
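The '[: : integer expression expected' complaint above is real but harmless: nvmf/common.sh line 33 hands an unset variable straight to test's -eq, which is exactly the '[' '' -eq 1 ']' shown in the trace. The log does not reveal which flag is being tested, so the name below is hypothetical; the defensive pattern is to default the expansion before the numeric test:

# SPDK_TEST_SOME_FLAG is a placeholder name; the point is the ${VAR:-0} default,
# which keeps the -eq operand numeric even when the flag was never exported
if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--some-arg)   # stand-in for whatever common.sh@33 actually appends
fi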
local -ga e810 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:55.351 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.351 
10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:55.351 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:55.351 Found net devices under 0000:84:00.0: cvl_0_0 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:55.351 Found net devices under 0000:84:00.1: cvl_0_1 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
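Summarizing the gather_supported_nvmf_pci_devs trace above: common.sh keeps per-family lists of PCI device IDs (e810, x722, mlx), selects the family named by SPDK_TEST_NVMF_NICS=e810, then resolves each PCI address to its kernel netdev through sysfs, which is how 0000:84:00.0/1 become cvl_0_0 and cvl_0_1. A condensed sketch of that walk; the array names and sysfs glob are taken from the trace, the loop body is abridged:

for pci in "${pci_devs[@]}"; do
    # each PCI function lists its bound net interfaces under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep names like cvl_0_0
    net_devs+=("${pci_net_devs[@]}")
done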
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:55.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:11:55.351 00:11:55.351 --- 10.0.0.2 ping statistics --- 00:11:55.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.351 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:11:55.351 00:11:55.351 --- 10.0.0.1 ping statistics --- 00:11:55.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.351 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1981359 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1981359 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
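nvmf_tcp_init, traced above, needs no virtual links on a phy run: one port of the two-port E810 NIC moves into a fresh network namespace to act as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and the two pings prove the path in both directions. The same topology, reduced to the commands in the trace (device names as logged; the harness's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment for later cleanup):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator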
1981359 ']' 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.351 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.352 [2024-12-09 10:22:39.862702] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:11:55.352 [2024-12-09 10:22:39.862829] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.609 [2024-12-09 10:22:40.008049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.609 [2024-12-09 10:22:40.115461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.609 [2024-12-09 10:22:40.115577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.609 [2024-12-09 10:22:40.115612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.609 [2024-12-09 10:22:40.115641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.609 [2024-12-09 10:22:40.115667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
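The startup notices above also say how to inspect this target while it runs: it was launched with tracepoint group mask 0xFFFF, so events can be snapshotted from shared memory, or the shm file kept for offline decoding. Straight from the notice text:

spdk_trace -s nvmf -i 0        # snapshot events from the live target (shm id 0)
cp /dev/shm/nvmf_trace.0 .     # or keep the raw trace file for offline analysis/debug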
00:11:55.609 [2024-12-09 10:22:40.118927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.609 [2024-12-09 10:22:40.119032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.609 [2024-12-09 10:22:40.119036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:55.866 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:56.124 [2024-12-09 10:22:40.697759] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.124 10:22:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:57.057 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.315 [2024-12-09 10:22:41.722076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.315 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.882 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:58.461 Malloc0 00:11:58.461 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:59.028 Delay0 00:11:59.028 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.597 10:22:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:00.174 NULL1 00:12:00.174 10:22:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:00.742 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1982038 00:12:00.742 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:00.742 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:00.742 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.117 Read completed with error (sct=0, sc=11) 00:12:02.117 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.376 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:02.376 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:02.944 true 00:12:02.944 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:02.944 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.460 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.460 
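Condensed from the traces above (rpc.py abbreviates the full scripts/rpc.py path in the log), the fixture is built with a short RPC sequence, spdk_nvme_perf is put in the background, and the script then hot-removes and re-adds namespace 1 while growing NULL1 for as long as perf stays alive. The flood of 'Read completed with error (sct=0, sc=11)' lines is the point of the test: assuming the decimal print, sc 11 is 0x0b, Invalid Namespace or Format, which is what in-flight reads against a just-removed namespace should complete with. The shape below is paraphrased from the ns_hotplug_stress.sh@44-50 traces rather than quoted:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py bdev_null_create NULL1 1000 512
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2> /dev/null; do
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # yank NSID 1 under load
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it back in
    rpc.py bdev_null_resize NULL1 $((++null_size))                  # and grow the other namespace
done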
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:03.717 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:03.717 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:04.282 true 00:12:04.282 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:04.282 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.849 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.367 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:05.367 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:05.933 true 00:12:05.933 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:05.933 10:22:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.313 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.883 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:07.883 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:08.142 true 00:12:08.142 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:08.142 10:22:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.520 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.780 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:09.780 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:10.347 true 00:12:10.347 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:10.347 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.606 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:10.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.124 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:11.124 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:11.708 true 00:12:11.708 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:11.708 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.966 10:22:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:12.793 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:12.793 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:13.051 true 00:12:13.051 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:13.051 10:22:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.426 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:14.941 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:14.941 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:15.505 true 00:12:15.505 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:15.505 10:22:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.762 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:16.277 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:16.277 10:23:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:16.844 true 00:12:16.844 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:16.844 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.412 10:23:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.671 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:17.671 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:18.240 true 00:12:18.240 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:18.240 10:23:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:19.651 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.935 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:19.935 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:20.500 true 00:12:20.500 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:20.501 10:23:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.134 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:22.134 10:23:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:22.699 true 00:12:22.699 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:22.699 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.274 10:23:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.274 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:12:23.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:23.532 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:23.532 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:24.097 true 00:12:24.097 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:24.097 10:23:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.661 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:24.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:25.228 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:25.228 10:23:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:25.486 true 00:12:25.486 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:25.486 10:23:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.862 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.120 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:12:27.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.378 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:27.378 10:23:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:27.945 true 00:12:27.945 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:27.945 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.512 10:23:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:28.770 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:28.770 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:29.337 true 00:12:29.337 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038 00:12:29.337 10:23:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.902 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.902 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11)
00:12:30.470 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:12:30.470 10:23:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:12:31.034 true
00:12:31.034 Initializing NVMe Controllers
00:12:31.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:31.034 Controller IO queue size 128, less than required.
00:12:31.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:31.034 Controller IO queue size 128, less than required.
00:12:31.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:31.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:31.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:31.034 Initialization complete. Launching workers.
00:12:31.034 ========================================================
00:12:31.034 Latency(us)
00:12:31.034 Device Information : IOPS MiB/s Average min max
00:12:31.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4955.52 2.42 20003.41 2260.35 1013925.22
00:12:31.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15756.50 7.69 8123.62 2803.79 446818.23
00:12:31.034 ========================================================
00:12:31.034 Total : 20712.02 10.11 10965.95 2260.35 1013925.22
00:12:31.034
00:12:31.034 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1982038
00:12:31.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1982038) - No such process
00:12:31.035 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1982038
00:12:31.035 10:23:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:31.601 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:32.168 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:32.168 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:32.168 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:32.168 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:32.168 10:23:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:32.734 null0
00:12:32.734 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:32.734 10:23:17
00:12:32.734 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:32.992 null1
00:12:32.992 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:32.992 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:32.992 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:12:33.250 null2
00:12:33.250 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:33.250 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:33.250 10:23:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:12:33.816 null3
00:12:33.816 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:33.816 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:33.816 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:12:34.381 null4
00:12:34.381 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:34.381 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:34.381 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:12:34.638 null5
00:12:34.638 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:34.639 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:34.639 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:12:35.204 null6
00:12:35.204 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:35.204 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:35.204 10:23:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:12:35.462 null7
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
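Before spawning the hotplug workers, the script creates eight null bdevs in a loop (lines 58-60 above); the trace's literal arguments to bdev_null_create are the bdev name, a size of 100 (MiB) and a 4096-byte block size. The equivalent loop, reconstructed from the xtrace with the same illustrative $rpc variable:

    nthreads=8                                        # @58
    pids=()                                           # @58
    for ((i = 0; i < nthreads; i++)); do              # @59
        "$rpc" bdev_null_create "null$i" 100 4096     # @60: name, size, block size
    done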
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:12:35.462 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1986090 1986091 1986093 1986097 1986101 1986103 1986105 1986107
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.463 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:35.721 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:35.721 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
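The @62-@64 lines above are the worker launch: each add_remove <nsid> <bdev> runs in the background, its PID is collected, and line 66 waits on all eight (PIDs 1986090-1986107 here). Putting those together with the @14-@18 lines, the worker and its launch loop reconstruct roughly as follows (again a sketch from the xtrace, not the verbatim script):

    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17: attach
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18: detach
        done
    }

    for ((i = 0; i < nthreads; i++)); do   # @62
        add_remove $((i + 1)) "null$i" &   # @63: one worker per namespace ID
        pids+=($!)                         # @64
    done
    wait "${pids[@]}"                      # @66

The interleaved xtrace that follows is these eight workers racing: each @17/@18 pair is one attach/detach of a namespace against cnode1 while the other workers do the same concurrently.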
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:35.979 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.238 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:36.496 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:36.496 10:23:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:36.754 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.013 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.272 10:23:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.545 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:37.803 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:37.804 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:37.804 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:38.061 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:38.061 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.062 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.062 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.319 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:38.320 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:38.577 10:23:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:38.577 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:38.836 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:38.836 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:38.836 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:38.836 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:38.836 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.094 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:39.351 10:23:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.609 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:39.884 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:40.140 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.140 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.140 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:40.140 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:40.397 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:40.397 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.397 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.397 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.661 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:40.941 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:41.198 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:41.455 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.455 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.455 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:12:41.455 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.455 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.455 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.455 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.455 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:12:41.456 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:41.713 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:41.971 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.971 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.971 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:12:41.971 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:41.971 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:41.971 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:12:41.971 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:12:42.230 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:42.230 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:42.230 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:42.230 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:42.230 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.489 rmmod nvme_tcp 00:12:42.489 rmmod nvme_fabrics 00:12:42.489 rmmod nvme_keyring 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1981359 ']' 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1981359 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1981359 ']' 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1981359 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.489 10:23:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1981359 00:12:42.489 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:42.489 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:42.489 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1981359' 00:12:42.489 killing process with pid 1981359 00:12:42.489 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1981359 00:12:42.489 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1981359 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.748 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.293 00:12:45.293 real 0m53.250s 00:12:45.293 user 4m1.701s 00:12:45.293 sys 0m18.470s 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.293 ************************************ 00:12:45.293 END TEST nvmf_ns_hotplug_stress 00:12:45.293 ************************************ 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:45.293 ************************************ 00:12:45.293 START TEST nvmf_delete_subsystem 00:12:45.293 ************************************ 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:45.293 * Looking for test storage... 
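Annotation: the wall of xtrace above is the tail of the ns-hotplug stress test. target/ns_hotplug_stress.sh@16-18 repeatedly hot-adds a namespace backed by one of the null bdevs and hot-removes others while traffic runs, and then nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the target (pid 1981359), strips only the SPDK_NVMF-tagged iptables rules, and deletes the test network namespace. A minimal sketch of the loop, reconstructed from the trace and not the verbatim upstream script (the exact add/remove ordering per iteration is randomized; $rpc is the rpc.py path seen in the log):

    #!/usr/bin/env bash
    # Sketch of the traced hotplug loop (assumptions: random nsid choice,
    # one add plus one or more removes per iteration, as seen in the log).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; ++i)); do
        # Hot-add: namespace id n is backed by bdev null(n-1), matching
        # log lines like "nvmf_subsystem_add_ns -n 2 ... null1".
        n=$((RANDOM % 8 + 1))
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
        # Hot-remove a namespace picked at random; failures are tolerated
        # because the id may already have been removed in a prior pass.
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true
    done

The point of tolerating failures is that the stress is in the target's hotplug path itself: the loop only has to keep the add/remove RPCs flowing for ten passes without crashing the target.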
00:12:45.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:45.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.293 --rc genhtml_branch_coverage=1 00:12:45.293 --rc genhtml_function_coverage=1 00:12:45.293 --rc genhtml_legend=1 00:12:45.293 --rc geninfo_all_blocks=1 00:12:45.293 --rc geninfo_unexecuted_blocks=1 00:12:45.293 00:12:45.293 ' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:45.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.293 --rc genhtml_branch_coverage=1 00:12:45.293 --rc genhtml_function_coverage=1 00:12:45.293 --rc genhtml_legend=1 00:12:45.293 --rc geninfo_all_blocks=1 00:12:45.293 --rc geninfo_unexecuted_blocks=1 00:12:45.293 00:12:45.293 ' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:45.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.293 --rc genhtml_branch_coverage=1 00:12:45.293 --rc genhtml_function_coverage=1 00:12:45.293 --rc genhtml_legend=1 00:12:45.293 --rc geninfo_all_blocks=1 00:12:45.293 --rc geninfo_unexecuted_blocks=1 00:12:45.293 00:12:45.293 ' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:45.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.293 --rc genhtml_branch_coverage=1 00:12:45.293 --rc genhtml_function_coverage=1 00:12:45.293 --rc genhtml_legend=1 00:12:45.293 --rc geninfo_all_blocks=1 00:12:45.293 --rc geninfo_unexecuted_blocks=1 00:12:45.293 00:12:45.293 ' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.293 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.294 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:48.600 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.600 
10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:48.600 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:48.600 Found net devices under 0000:84:00.0: cvl_0_0 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.600 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:48.601 Found net devices under 0000:84:00.1: cvl_0_1 
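Annotation: the discovery just traced (nvmf/common.sh@410-429) maps each whitelisted Intel E810 function (vendor:device 0x8086:0x159b) to the netdev the kernel bound to it by globbing sysfs. Roughly, keeping only the glob-and-echo part that produced the "Found net devices under ..." lines above:

    # Find the netdev name behind each E810 PCI function reported above.
    for pci in 0000:84:00.0 0000:84:00.1; do      # addresses from the log
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue            # no netdev bound here
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

On this rig that yields cvl_0_0 and cvl_0_1, the two ports used as target and initiator sides below.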
00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:48.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:12:48.601 00:12:48.601 --- 10.0.0.2 ping statistics --- 00:12:48.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.601 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:12:48.601 00:12:48.601 --- 10.0.0.1 ping statistics --- 00:12:48.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.601 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1989145 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1989145 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1989145 ']' 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.601 10:23:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.601 10:23:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.601 [2024-12-09 10:23:32.824281] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:12:48.601 [2024-12-09 10:23:32.824394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.601 [2024-12-09 10:23:32.960777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:48.601 [2024-12-09 10:23:33.072803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.601 [2024-12-09 10:23:33.072864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.601 [2024-12-09 10:23:33.072880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.601 [2024-12-09 10:23:33.072894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.601 [2024-12-09 10:23:33.072905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.601 [2024-12-09 10:23:33.075530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.601 [2024-12-09 10:23:33.075545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 [2024-12-09 10:23:33.399164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:48.861 10:23:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 [2024-12-09 10:23:33.422719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 NULL1 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 Delay0 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1989268 00:12:48.861 10:23:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:49.119 [2024-12-09 10:23:33.532873] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
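Annotation: condensed, the environment and target setup traced above comes down to the following (commands collected from the log). nvmf_tcp_init moves one NIC port into a private namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over the physical link while staying isolated from the host stack, and the delay bdev puts 1,000,000 us (~1 s) of latency on every operation so plenty of I/O is still in flight when the subsystem is deleted:

    # Network side (nvmf_tcp_init): isolate the target port in a namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the SPDK_NVMF comment is what lets the
    # iptables-save | grep -v SPDK_NVMF | iptables-restore teardown find it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Target side (delete_subsystem.sh@15-24): transport, subsystem,
    # listener, and a null bdev wrapped in a delay bdev.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0

The discovery-listener warning logged right after perf starts is expected on this path: perf connects to the discovery subsystem on 10.0.0.2:4420 even though that listener was only added to cnode1.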
00:12:51.020 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.020 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.020 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:51.020 Write completed with error (sct=0, sc=8)
00:12:51.020 Read completed with error (sct=0, sc=8)
00:12:51.020 starting I/O failed: -6
[hundreds of further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines, identical up to ordering]
00:12:51.020 [2024-12-09 10:23:35.627099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa27800d020 is same with the state(6) to be set
[the remaining aborted completions repeat until the queues drain]
Write completed with error (sct=0, sc=8) 00:12:51.021 Read completed with error (sct=0, sc=8) 00:12:51.021 Write completed with error (sct=0, sc=8) 00:12:51.021 Read completed with error (sct=0, sc=8) 00:12:51.021 Read completed with error (sct=0, sc=8) 00:12:51.021 Read completed with error (sct=0, sc=8) 00:12:51.021 Read completed with error (sct=0, sc=8) 00:12:52.081 [2024-12-09 10:23:36.593456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22309b0 is same with the state(6) to be set 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 [2024-12-09 10:23:36.626769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f860 is same with the state(6) to be set 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.081 Write completed with error (sct=0, sc=8) 00:12:52.081 [2024-12-09 10:23:36.627044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa27800d350 is same with the state(6) to be set 00:12:52.081 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write 
completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 [2024-12-09 10:23:36.629872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f4a0 is same with the state(6) to be set 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Write completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 Read completed with error (sct=0, sc=8) 00:12:52.082 [2024-12-09 10:23:36.630482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222f2c0 is same with the state(6) to be set 00:12:52.082 Initializing NVMe Controllers 00:12:52.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:52.082 Controller IO queue size 128, less than required. 00:12:52.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:52.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:52.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:52.082 Initialization complete. Launching workers. 
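The "Controller IO queue size 128, less than required" lines above are spdk_nvme_perf noting that the requested queue depth exceeds what the controller advertises, so the surplus requests wait inside the host NVMe driver. A minimal sketch of a re-run that heeds that advice, assuming the same binary and transport string this log already uses; the lowered "-q 64" is an illustrative value, not one taken from the test scripts:

# Hedged sketch: same workload, queue depth capped at or below the
# controller's advertised IO queue size (128) so nothing queues in the driver.
#   -r  transport ID of the target     -q  queue depth per qpair
#   -o  I/O size in bytes              -w  workload pattern
#   -M  read percentage of the mix     -t  run time in seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -q 64 -o 512 -w randrw -M 70 -t 3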
00:12:52.082 ========================================================
00:12:52.082 Latency(us)
00:12:52.082 Device Information : IOPS MiB/s Average min max
00:12:52.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.06 0.08 973835.16 787.00 1012296.84
00:12:52.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.69 0.08 894385.82 476.54 2001134.31
00:12:52.082 ========================================================
00:12:52.082 Total : 324.75 0.16 935990.59 476.54 2001134.31
00:12:52.082
00:12:52.082 [2024-12-09 10:23:36.631184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22309b0 (9): Bad file descriptor
00:12:52.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:12:52.082 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.082 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:12:52.082 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1989268
00:12:52.082 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1989268
00:12:52.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1989268) - No such process
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1989268
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1989268
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1989268
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.649 10:23:37
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:52.649 [2024-12-09 10:23:37.163957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1989700 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989700 00:12:52.649 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:52.649 [2024-12-09 10:23:37.298277] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
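The xtrace above rebuilds the deleted subsystem in three RPCs before kicking off a fresh perf run. rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, so the same sequence as stand-alone calls would look roughly like this sketch (arguments copied from the trace; Delay0 is the bdev the test set up earlier; the default RPC socket of the running nvmf_tgt is assumed):

# Sketch of the rebuild sequence seen in the trace, as direct rpc.py calls.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001 -m 10    # allow any host, set serial, cap at 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420        # listen on the target-side address
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # expose the bdev as NSID 1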
00:12:53.216 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:53.216 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989700
00:12:53.216 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[elided: the same three xtrace lines repeat five more times between 00:12:53.783 and 00:12:55.739 while perf pid 1989700 is still alive]
00:12:56.003 Initializing NVMe Controllers
00:12:56.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:56.003 Controller IO queue size 128, less than required.
00:12:56.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:56.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:56.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:56.003 Initialization complete. Launching workers.
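The repeating triplets above are delete_subsystem.sh polling for the perf process to exit. Reconstructed from the xtrace (script lines 57-60), the wait loop is essentially the shape below; this is an inference from the trace, not a verbatim quote of the script:

# Hedged sketch: poll perf every 0.5 s, fail if it outlives roughly 10 s.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # line 57: pid still alive?
  sleep 0.5                                 # line 58
  (( delay++ > 20 )) && exit 1              # line 60: give up after 21 polls
done                                        # (the script's actual failure action may differ)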
00:12:56.003 ======================================================== 00:12:56.003 Latency(us) 00:12:56.003 Device Information : IOPS MiB/s Average min max 00:12:56.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004799.53 1000148.16 1042177.31 00:12:56.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006268.56 1000198.96 1044005.11 00:12:56.003 ======================================================== 00:12:56.003 Total : 256.00 0.12 1005534.05 1000148.16 1044005.11 00:12:56.003 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989700 00:12:56.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1989700) - No such process 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1989700 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.263 rmmod nvme_tcp 00:12:56.263 rmmod nvme_fabrics 00:12:56.263 rmmod nvme_keyring 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1989145 ']' 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1989145 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1989145 ']' 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1989145 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1989145 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1989145' 00:12:56.263 killing process with pid 1989145 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1989145 00:12:56.263 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1989145 00:12:56.827 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.827 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:56.827 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:56.827 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:12:56.827 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:12:56.827 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:56.828 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:56.828 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.828 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.828 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.828 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.828 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.729 00:12:58.729 real 0m13.780s 00:12:58.729 user 0m29.111s 00:12:58.729 sys 0m3.901s 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 ************************************ 00:12:58.729 END TEST nvmf_delete_subsystem 00:12:58.729 ************************************ 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:58.729 ************************************ 00:12:58.729 START TEST nvmf_host_management 00:12:58.729 ************************************ 00:12:58.729 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:58.988 * Looking for test storage... 
00:12:58.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:58.988 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.989 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.989 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:59.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.248 --rc genhtml_branch_coverage=1 00:12:59.248 --rc genhtml_function_coverage=1 00:12:59.248 --rc genhtml_legend=1 00:12:59.248 --rc geninfo_all_blocks=1 00:12:59.248 --rc geninfo_unexecuted_blocks=1 00:12:59.248 00:12:59.248 ' 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:59.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.248 --rc genhtml_branch_coverage=1 00:12:59.248 --rc genhtml_function_coverage=1 00:12:59.248 --rc genhtml_legend=1 00:12:59.248 --rc geninfo_all_blocks=1 00:12:59.248 --rc geninfo_unexecuted_blocks=1 00:12:59.248 00:12:59.248 ' 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:59.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.248 --rc genhtml_branch_coverage=1 00:12:59.248 --rc genhtml_function_coverage=1 00:12:59.248 --rc genhtml_legend=1 00:12:59.248 --rc geninfo_all_blocks=1 00:12:59.248 --rc geninfo_unexecuted_blocks=1 00:12:59.248 00:12:59.248 ' 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:59.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.248 --rc genhtml_branch_coverage=1 00:12:59.248 --rc genhtml_function_coverage=1 00:12:59.248 --rc genhtml_legend=1 00:12:59.248 --rc geninfo_all_blocks=1 00:12:59.248 --rc geninfo_unexecuted_blocks=1 00:12:59.248 00:12:59.248 ' 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.248 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=[as above, with the toolchain directories prepended once more]
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=[as above, prepended once more]
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [the resolved PATH, identical to the final export above]
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:12:59.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.249 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.541 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:02.542 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:02.542 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:02.542 Found net devices under 0000:84:00.0: cvl_0_0 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.542 10:23:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:02.542 Found net devices under 0000:84:00.1: cvl_0_1 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:13:02.542 00:13:02.542 --- 10.0.0.2 ping statistics --- 00:13:02.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.542 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:13:02.542 00:13:02.542 --- 10.0.0.1 ping statistics --- 00:13:02.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.542 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1992205 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1992205 00:13:02.542 10:23:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1992205 ']' 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.542 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.543 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.543 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.543 [2024-12-09 10:23:46.861480] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:13:02.543 [2024-12-09 10:23:46.861661] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.543 [2024-12-09 10:23:47.050437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.543 [2024-12-09 10:23:47.175057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.543 [2024-12-09 10:23:47.175174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.543 [2024-12-09 10:23:47.175212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.543 [2024-12-09 10:23:47.175243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.543 [2024-12-09 10:23:47.175268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
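Behind all of the nvmf/common.sh tracing above is a simple back-to-back topology: the two ports of one E810 adapter are cabled together, one port is pushed into a private network namespace to act as the target, and the initiator reaches it over 10.0.0.0/24. A condensed sketch, with every command taken from the trace in this log (only the grouping is editorial); note -m 0x1E is binary 11110, i.e. cores 1-4, which is why four reactors report in below:

# Target port goes into its own namespace; the initiator stays in the root one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The comment tag is what lets nvmftestfini strip this rule later with
# iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2    # initiator -> target sanity check
# Launch the target inside the namespace on cores 1-4.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E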
00:13:02.543 [2024-12-09 10:23:47.178861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:02.543 [2024-12-09 10:23:47.178971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:02.543 [2024-12-09 10:23:47.179026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:13:02.543 [2024-12-09 10:23:47.179029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:02.801 [2024-12-09 10:23:47.348216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:02.801 Malloc0
00:13:02.801 [2024-12-09 10:23:47.430118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:02.801 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1992252
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1992252 /var/tmp/bdevperf.sock
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1992252 ']'
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:13:03.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:13:03.060 {
00:13:03.060 "params": {
00:13:03.060 "name": "Nvme$subsystem",
00:13:03.060 "trtype": "$TEST_TRANSPORT",
00:13:03.060 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:03.060 "adrfam": "ipv4",
00:13:03.060 "trsvcid": "$NVMF_PORT",
00:13:03.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:03.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:03.060 "hdgst": ${hdgst:-false},
00:13:03.060 "ddgst": ${ddgst:-false}
00:13:03.060 },
00:13:03.060 "method": "bdev_nvme_attach_controller"
00:13:03.060 }
00:13:03.060 EOF
00:13:03.060 )")
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:13:03.060 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:13:03.060 "params": {
00:13:03.060 "name": "Nvme0",
00:13:03.060 "trtype": "tcp",
00:13:03.060 "traddr": "10.0.0.2",
00:13:03.060 "adrfam": "ipv4",
00:13:03.060 "trsvcid": "4420",
00:13:03.060 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:03.060 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:03.060 "hdgst": false,
00:13:03.060 "ddgst": false
00:13:03.060 },
00:13:03.060 "method": "bdev_nvme_attach_controller"
00:13:03.060 }'
00:13:03.060 [2024-12-09 10:23:47.522600] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
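The gen_nvmf_target_json trace above shows how bdevperf gets its configuration without anything touching disk: one heredoc-expanded bdev_nvme_attach_controller fragment per subsystem, joined with IFS=, and validated by jq, then handed to bdevperf as /dev/fd/63 via process substitution. A stripped-down sketch of the same pattern; the outer subsystems/bdev wrapper and the variable defaults are assumptions for illustration, since the trace only shows the per-controller fragments:

# Sketch of the config generator traced above: expand one attach-controller
# fragment per requested subsystem, comma-join them, pretty-print with jq.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # The subsystems/bdev wrapper is an assumption for this sketch; the trace
    # only shows the fragments above being joined and piped through jq.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

# Usage mirroring the trace: bdevperf reads the config from an anonymous fd.
#   ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
#       --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10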
00:13:03.060 [2024-12-09 10:23:47.522696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992252 ]
00:13:03.060 [2024-12-09 10:23:47.607218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:03.060 [2024-12-09 10:23:47.673284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:03.628 Running I/O for 10 seconds...
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:13:03.628 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']'
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.888 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:03.888 [2024-12-09 10:23:48.429660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.888 [2024-12-09 10:23:48.429762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.888 [2024-12-09 10:23:48.429793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.888 [2024-12-09 10:23:48.429821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.888 [2024-12-09 10:23:48.429837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.888 [2024-12-09 10:23:48.429852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.888 [2024-12-09 10:23:48.429869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.888 [2024-12-09 10:23:48.429883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.888 [2024-12-09 10:23:48.429899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.888 [2024-12-09 10:23:48.429913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
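waitforio, which the trace shows succeeding on its second poll (67 reads, then 515), is a bounded poll on the bdev's read counter over the bdevperf RPC socket. Reconstructed roughly from the traced commands; scripts/rpc.py stands in for the suite's rpc_cmd wrapper, which is an assumption:

# Sketch of waitforio as traced from host_management.sh: up to ten polls,
# 0.25 s apart, until the named bdev reports at least 100 completed reads.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # I/O is flowing; in the log this fired at 515 reads
            break
        fi
        sleep 0.25
    done
    return $ret
}

# waitforio /var/tmp/bdevperf.sock Nvme0n1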
00:13:03.888 [2024-12-09 10:23:48.429928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.888 [2024-12-09 10:23:48.429943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.889 [... identical WRITE command/ABORTED - SQ DELETION completion pairs repeat for cid:21 through cid:62, lba 76416 through 81664 ...]
00:13:03.889 [2024-12-09 10:23:48.431274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.889 [2024-12-09 10:23:48.431288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.889 [2024-12-09 10:23:48.431308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.889 [2024-12-09 10:23:48.431323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.890 [... identical READ command/ABORTED - SQ DELETION completion pairs repeat for cid:1 through cid:13, lba 73856 through 75392 ...]
00:13:03.890 [2024-12-09 10:23:48.431749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:03.890 [2024-12-09 10:23:48.431764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:03.890 [2024-12-09 10:23:48.433040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:13:03.890 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.890 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:03.890 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.890 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:03.890 task offset: 75648 on job bdev=Nvme0n1 fails
00:13:03.890
00:13:03.890 Latency(us)
00:13:03.890 [2024-12-09T09:23:48.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:03.890 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:03.890 Job: Nvme0n1 ended in about 0.40 seconds with error
00:13:03.890 Verification LBA range: start 0x0 length 0x400
00:13:03.890 Nvme0n1 : 0.40 1430.25 89.39 158.92 0.00 39048.95 2767.08 37282.70
00:13:03.890 [2024-12-09T09:23:48.544Z] ===================================================================================================================
00:13:03.890 [2024-12-09T09:23:48.544Z] Total : 1430.25 89.39 158.92 0.00 39048.95 2767.08 37282.70
00:13:03.890 [2024-12-09 10:23:48.436045] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:03.890 [2024-12-09 10:23:48.436078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2429c60 (9): Bad file descriptor
00:13:03.890 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.890 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:13:03.890 [2024-12-09 10:23:48.445871] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1992252
00:13:04.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1992252) - No such process
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:13:04.824 {
00:13:04.824 "params": {
00:13:04.824 "name": "Nvme$subsystem",
00:13:04.824 "trtype": "$TEST_TRANSPORT",
00:13:04.824 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:04.824 "adrfam": "ipv4",
00:13:04.824 "trsvcid": "$NVMF_PORT",
00:13:04.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:04.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:04.824 "hdgst": ${hdgst:-false},
00:13:04.824 "ddgst": ${ddgst:-false}
00:13:04.824 },
00:13:04.824 "method": "bdev_nvme_attach_controller"
00:13:04.824 }
00:13:04.824 EOF
00:13:04.824 )")
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:13:04.824 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:13:04.824 "params": {
00:13:04.824 "name": "Nvme0",
00:13:04.824 "trtype": "tcp",
00:13:04.824 "traddr": "10.0.0.2",
00:13:04.824 "adrfam": "ipv4",
00:13:04.824 "trsvcid": "4420",
00:13:04.824 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:04.824 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:04.824 "hdgst": false,
00:13:04.824 "ddgst": false
00:13:04.824 },
00:13:04.824 "method": "bdev_nvme_attach_controller"
00:13:04.824 }'
00:13:05.082 [2024-12-09 10:23:49.508388] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:13:05.082 [2024-12-09 10:23:49.508558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992534 ]
00:13:05.082 [2024-12-09 10:23:49.607574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:05.082 [2024-12-09 10:23:49.667895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:05.646 Running I/O for 1 seconds...
00:13:06.580 1536.00 IOPS, 96.00 MiB/s
00:13:06.580
00:13:06.580 Latency(us)
00:13:06.580 [2024-12-09T09:23:51.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:06.580 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:06.580 Verification LBA range: start 0x0 length 0x400
00:13:06.580 Nvme0n1 : 1.01 1586.43 99.15 0.00 0.00 39696.67 6893.42 34758.35
00:13:06.580 [2024-12-09T09:23:51.234Z] ===================================================================================================================
00:13:06.580 [2024-12-09T09:23:51.234Z] Total : 1586.43 99.15 0.00 0.00 39696.67 6893.42 34758.35
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:06.839 rmmod nvme_tcp
00:13:06.839 rmmod nvme_fabrics
00:13:06.839 rmmod nvme_keyring
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1992205 ']'
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1992205
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1992205 ']'
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1992205
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1992205
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1992205'
00:13:06.839 killing process with pid 1992205
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1992205
00:13:06.839 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1992205
00:13:07.098 [2024-12-09 10:23:51.634739] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:07.098 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:13:09.629
00:13:09.629 real 0m10.408s
00:13:09.629 user 0m21.772s
00:13:09.629 sys 0m3.842s
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:09.629 ************************************
00:13:09.629 END TEST nvmf_host_management
00:13:09.629 ************************************
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:09.629 10:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:13:09.629 ************************************
00:13:09.629 START TEST nvmf_lvol
00:13:09.629 ************************************
00:13:09.630 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:13:09.630 * Looking for test storage...
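One detail worth pulling out of the teardown traced above: the iptr call at nvmf/common.sh@297 round-trips the ruleset through iptables-save and iptables-restore, and the grep -v SPDK_NVMF in between is why the ACCEPT rule added at setup carried an 'SPDK_NVMF:' comment. Tagged rules can be dropped wholesale with no per-rule bookkeeping. A sketch reconstructed from the two traced call sites; the helper names match the trace, the bodies are inferred:

# Reconstructed from the traced call sites (nvmf/common.sh@790/@791):
# ipts tags every rule it adds, iptr strips all tagged rules at teardown.
ipts() {
    # e.g. ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # yields the '--comment SPDK_NVMF:-I INPUT 1 ...' form seen in the log.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Round-trip the ruleset; every rule carrying the tag disappears.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}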
00:13:09.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:09.630 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:09.630 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:13:09.630 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:09.630 --rc genhtml_branch_coverage=1
00:13:09.630 --rc genhtml_function_coverage=1
00:13:09.630 --rc genhtml_legend=1
00:13:09.630 --rc geninfo_all_blocks=1
00:13:09.630 --rc geninfo_unexecuted_blocks=1
00:13:09.630
00:13:09.630 '
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:09.630 --rc genhtml_branch_coverage=1
00:13:09.630 --rc genhtml_function_coverage=1
00:13:09.630 --rc genhtml_legend=1
00:13:09.630 --rc geninfo_all_blocks=1
00:13:09.630 --rc geninfo_unexecuted_blocks=1
00:13:09.630
00:13:09.630 '
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:13:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:09.630 --rc genhtml_branch_coverage=1
00:13:09.630 --rc genhtml_function_coverage=1
00:13:09.630 --rc genhtml_legend=1
00:13:09.630 --rc geninfo_all_blocks=1
00:13:09.630 --rc geninfo_unexecuted_blocks=1
00:13:09.630
00:13:09.630 '
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:13:09.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:09.630 --rc genhtml_branch_coverage=1
00:13:09.630 --rc genhtml_function_coverage=1
00:13:09.630 --rc genhtml_legend=1
00:13:09.630 --rc geninfo_all_blocks=1
00:13:09.630 --rc geninfo_unexecuted_blocks=1
00:13:09.630
00:13:09.630 '
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
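The scripts/common.sh trace above (cmp_versions 1.15 '<' 2, deciding whether the installed lcov predates 2.x) is a component-wise version comparison: both strings are split on '.', '-' and ':', each component is compared numerically, and the first difference decides. Condensed into a standalone sketch of the same logic, not the exact helper:

# Condensed sketch of the cmp_versions logic traced above: succeed when
# version $1 sorts strictly before $2, comparing components numerically.
lt() {
    local IFS='.-:' v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # Missing components default to 0, so "2" compares like "2.0".
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1   # equal is not "less than"
}

lt 1.15 2 && echo "installed lcov predates 2.x"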
00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.630 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.631 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:12.953 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:12.953 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.953 10:23:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:12.953 Found net devices under 0000:84:00.0: cvl_0_0 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:12.953 Found net devices under 0000:84:00.1: cvl_0_1 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.953 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:12.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:13:12.954 00:13:12.954 --- 10.0.0.2 ping statistics --- 00:13:12.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.954 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:13:12.954 00:13:12.954 --- 10.0.0.1 ping statistics --- 00:13:12.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.954 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1994890 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1994890 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1994890 ']' 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.954 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:12.954 [2024-12-09 10:23:57.496330] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
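The setup traced above reduces to a short shell sequence. A minimal standalone sketch, using the interface names, addresses, and binary path from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &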
00:13:12.954 [2024-12-09 10:23:57.496428] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.954 [2024-12-09 10:23:57.588680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.212 [2024-12-09 10:23:57.699679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.212 [2024-12-09 10:23:57.699821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.212 [2024-12-09 10:23:57.699860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.212 [2024-12-09 10:23:57.699889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.212 [2024-12-09 10:23:57.699914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.212 [2024-12-09 10:23:57.702954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.212 [2024-12-09 10:23:57.703081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.212 [2024-12-09 10:23:57.703091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.469 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.469 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:13:13.469 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.469 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:13.469 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:13.469 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.469 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:13.727 [2024-12-09 10:23:58.226403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.727 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:14.304 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:14.304 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:14.568 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:14.568 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:15.148 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:15.714 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=95cc7015-981d-466d-812a-9a642ba5e09e 00:13:15.714 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 95cc7015-981d-466d-812a-9a642ba5e09e lvol 20 00:13:15.973 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8508570b-60a7-48a2-99e8-c5103fb2cd01 00:13:15.973 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:16.539 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8508570b-60a7-48a2-99e8-c5103fb2cd01 00:13:17.106 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:17.364 [2024-12-09 10:24:01.894705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.364 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:17.933 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1995573 00:13:17.933 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:17.933 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:18.868 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8508570b-60a7-48a2-99e8-c5103fb2cd01 MY_SNAPSHOT 00:13:19.126 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6c3d5640-e21f-41ef-926b-b6dd4add31ac 00:13:19.127 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8508570b-60a7-48a2-99e8-c5103fb2cd01 30 00:13:19.715 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6c3d5640-e21f-41ef-926b-b6dd4add31ac MY_CLONE 00:13:20.283 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=be573626-e458-4b36-af09-8ab32a6a0001 00:13:20.283 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate be573626-e458-4b36-af09-8ab32a6a0001 00:13:21.218 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1995573 00:13:29.334 Initializing NVMe Controllers 00:13:29.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:29.334 Controller IO queue size 128, less than required. 00:13:29.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
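The lvol scenario itself is a plain RPC sequence: two 64 MiB malloc bdevs striped into a raid0, an lvstore on top, a 20 MiB lvol exported over NVMe/TCP, then a snapshot/resize/clone/inflate pass taken while spdk_nvme_perf drives random writes against the namespace. Condensed, with rpc.py standing in for the full scripts/rpc.py path and the UUIDs as printed in this run:

rpc.py bdev_malloc_create 64 512                                      # Malloc0
rpc.py bdev_malloc_create 64 512                                      # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'      # 64 KiB strips, raid level 0
rpc.py bdev_lvol_create_lvstore raid0 lvs                             # -> 95cc7015-981d-466d-812a-9a642ba5e09e
rpc.py bdev_lvol_create -u 95cc7015-981d-466d-812a-9a642ba5e09e lvol 20   # LVOL_BDEV_INIT_SIZE=20 (MiB)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8508570b-60a7-48a2-99e8-c5103fb2cd01
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &                # 10 s of random writes
rpc.py bdev_lvol_snapshot 8508570b-60a7-48a2-99e8-c5103fb2cd01 MY_SNAPSHOT
rpc.py bdev_lvol_resize 8508570b-60a7-48a2-99e8-c5103fb2cd01 30       # LVOL_BDEV_FINAL_SIZE=30 (MiB)
rpc.py bdev_lvol_clone 6c3d5640-e21f-41ef-926b-b6dd4add31ac MY_CLONE  # clone the snapshot
rpc.py bdev_lvol_inflate be573626-e458-4b36-af09-8ab32a6a0001         # decouple the clone from its parent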
00:13:29.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:29.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:29.334 Initialization complete. Launching workers. 00:13:29.334 ======================================================== 00:13:29.334 Latency(us) 00:13:29.334 Device Information : IOPS MiB/s Average min max 00:13:29.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10469.40 40.90 12234.25 2152.89 138555.64 00:13:29.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10355.30 40.45 12362.88 2263.41 71738.59 00:13:29.334 ======================================================== 00:13:29.334 Total : 20824.70 81.35 12298.21 2152.89 138555.64 00:13:29.334 00:13:29.334 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:29.334 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8508570b-60a7-48a2-99e8-c5103fb2cd01 00:13:29.334 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95cc7015-981d-466d-812a-9a642ba5e09e 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:29.902 rmmod nvme_tcp 00:13:29.902 rmmod nvme_fabrics 00:13:29.902 rmmod nvme_keyring 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1994890 ']' 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1994890 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1994890 ']' 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1994890 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994890 00:13:29.902 10:24:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994890' 00:13:29.902 killing process with pid 1994890 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1994890 00:13:29.902 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1994890 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.469 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.391 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:32.391 00:13:32.391 real 0m23.097s 00:13:32.391 user 1m15.866s 00:13:32.391 sys 0m7.093s 00:13:32.391 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.391 10:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:32.391 ************************************ 00:13:32.391 END TEST nvmf_lvol 00:13:32.391 ************************************ 00:13:32.391 10:24:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:32.391 10:24:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.391 10:24:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.391 10:24:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:32.391 ************************************ 00:13:32.391 START TEST nvmf_lvs_grow 00:13:32.391 ************************************ 00:13:32.391 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:32.650 * Looking for test storage... 
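Teardown mirrored the setup in reverse before the lvs_grow suite began; roughly, using the pid and UUIDs from this run:

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete 8508570b-60a7-48a2-99e8-c5103fb2cd01
rpc.py bdev_lvol_delete_lvstore -u 95cc7015-981d-466d-812a-9a642ba5e09e
modprobe -v -r nvme-tcp                                # unloads nvme_tcp, nvme_fabrics, nvme_keyring
kill 1994890                                           # the nvmf_tgt started for this test
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK_NVMF-tagged rule
# _remove_spdk_ns tears down cvl_0_0_ns_spdk, returning cvl_0_0 to the root ns
ip -4 addr flush cvl_0_1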
00:13:32.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.650 --rc genhtml_branch_coverage=1 00:13:32.650 --rc genhtml_function_coverage=1 00:13:32.650 --rc genhtml_legend=1 00:13:32.650 --rc geninfo_all_blocks=1 00:13:32.650 --rc geninfo_unexecuted_blocks=1 00:13:32.650 00:13:32.650 ' 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.650 --rc genhtml_branch_coverage=1 00:13:32.650 --rc genhtml_function_coverage=1 00:13:32.650 --rc genhtml_legend=1 00:13:32.650 --rc geninfo_all_blocks=1 00:13:32.650 --rc geninfo_unexecuted_blocks=1 00:13:32.650 00:13:32.650 ' 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.650 --rc genhtml_branch_coverage=1 00:13:32.650 --rc genhtml_function_coverage=1 00:13:32.650 --rc genhtml_legend=1 00:13:32.650 --rc geninfo_all_blocks=1 00:13:32.650 --rc geninfo_unexecuted_blocks=1 00:13:32.650 00:13:32.650 ' 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:32.650 --rc genhtml_branch_coverage=1 00:13:32.650 --rc genhtml_function_coverage=1 00:13:32.650 --rc genhtml_legend=1 00:13:32.650 --rc geninfo_all_blocks=1 00:13:32.650 --rc geninfo_unexecuted_blocks=1 00:13:32.650 00:13:32.650 ' 00:13:32.650 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.908 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:32.908 10:24:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.908 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.908 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.908 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.908 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:32.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:13:32.909 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.198 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:36.199 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:36.199 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.199 10:24:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:36.199 Found net devices under 0000:84:00.0: cvl_0_0 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:36.199 Found net devices under 0000:84:00.1: cvl_0_1 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:36.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:13:36.199 00:13:36.199 --- 10.0.0.2 ping statistics --- 00:13:36.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.199 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:13:36.199 00:13:36.199 --- 10.0.0.1 ping statistics --- 00:13:36.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.199 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.199 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1999628 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1999628 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1999628 ']' 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.200 10:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.200 [2024-12-09 10:24:20.545152] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
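lvs_grow repeats the same namespace bring-up but pins the target to a single core and, in the RPC that follows, sizes the transport IO unit; sketched:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # one reactor, core 0
rpc.py nvmf_create_transport -t tcp -o -u 8192   # -o disables the TCP C2H success optimization, -u sets io_unit_size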
00:13:36.200 [2024-12-09 10:24:20.545268] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.200 [2024-12-09 10:24:20.696752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.200 [2024-12-09 10:24:20.812974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.200 [2024-12-09 10:24:20.813087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.200 [2024-12-09 10:24:20.813153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.200 [2024-12-09 10:24:20.813185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.200 [2024-12-09 10:24:20.813211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.200 [2024-12-09 10:24:20.814535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.458 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.458 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:13:36.458 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.458 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.458 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.458 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.458 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:37.025 [2024-12-09 10:24:21.424026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 ************************************ 00:13:37.025 START TEST lvs_grow_clean 00:13:37.025 ************************************ 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:37.025 10:24:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:37.025 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:37.283 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:37.283 10:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:38.218 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:38.218 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:38.218 10:24:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:38.785 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:38.785 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:38.785 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47f7017e-fd49-428b-b33d-0f5d35545bff lvol 150 00:13:39.044 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0bd2876a-4c95-4f72-88e2-6b2c4fb84b76 00:13:39.044 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:39.044 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:39.611 [2024-12-09 10:24:24.220951] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:39.611 [2024-12-09 10:24:24.221142] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:39.611 true 00:13:39.611 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:39.611 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:40.175 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:40.175 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:40.740 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0bd2876a-4c95-4f72-88e2-6b2c4fb84b76 00:13:41.305 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:41.562 [2024-12-09 10:24:26.131681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.562 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.129 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2000337 00:13:42.129 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:42.129 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.129 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2000337 /var/tmp/bdevperf.sock 00:13:42.130 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2000337 ']' 00:13:42.130 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.130 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.130 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.130 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.130 10:24:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:42.130 [2024-12-09 10:24:26.650060] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
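At this point lvs_grow_clean has laid out the device under test: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters and --md-pages-per-cluster-ratio 300 (extra metadata pages reserved precisely so the store can be grown later), a 150 MiB lvol, then a resize of the backing file to 400 MiB followed by bdev_aio_rescan, which doubles the block count without yet changing the lvstore's 49 data clusters. A condensed sketch of that RPC sequence, where $rpc stands for scripts/rpc.py, aio_file for the test/nvmf/target/aio_bdev path, and the lvs/lvol variables for this run's generated UUIDs:

    truncate -s 200M aio_file
    $rpc bdev_aio_create aio_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49
    lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)
    truncate -s 400M aio_file                  # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev              # ...bdev goes 51200 -> 102400 blocks
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # still 49
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The grow itself (bdev_lvol_grow_lvstore) is deliberately deferred until bdevperf I/O is in flight below, after which total_data_clusters jumps from 49 to 99.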
00:13:42.130 [2024-12-09 10:24:26.650237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000337 ] 00:13:42.388 [2024-12-09 10:24:26.820478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.388 [2024-12-09 10:24:26.940953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.646 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.646 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:13:42.646 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:43.216 Nvme0n1 00:13:43.216 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:43.783 [ 00:13:43.783 { 00:13:43.783 "name": "Nvme0n1", 00:13:43.783 "aliases": [ 00:13:43.783 "0bd2876a-4c95-4f72-88e2-6b2c4fb84b76" 00:13:43.783 ], 00:13:43.783 "product_name": "NVMe disk", 00:13:43.783 "block_size": 4096, 00:13:43.783 "num_blocks": 38912, 00:13:43.783 "uuid": "0bd2876a-4c95-4f72-88e2-6b2c4fb84b76", 00:13:43.783 "numa_id": 1, 00:13:43.783 "assigned_rate_limits": { 00:13:43.783 "rw_ios_per_sec": 0, 00:13:43.783 "rw_mbytes_per_sec": 0, 00:13:43.783 "r_mbytes_per_sec": 0, 00:13:43.783 "w_mbytes_per_sec": 0 00:13:43.783 }, 00:13:43.783 "claimed": false, 00:13:43.783 "zoned": false, 00:13:43.783 "supported_io_types": { 00:13:43.783 "read": true, 00:13:43.783 "write": true, 00:13:43.783 "unmap": true, 00:13:43.783 "flush": true, 00:13:43.783 "reset": true, 00:13:43.783 "nvme_admin": true, 00:13:43.783 "nvme_io": true, 00:13:43.783 "nvme_io_md": false, 00:13:43.783 "write_zeroes": true, 00:13:43.783 "zcopy": false, 00:13:43.783 "get_zone_info": false, 00:13:43.783 "zone_management": false, 00:13:43.783 "zone_append": false, 00:13:43.783 "compare": true, 00:13:43.783 "compare_and_write": true, 00:13:43.783 "abort": true, 00:13:43.783 "seek_hole": false, 00:13:43.783 "seek_data": false, 00:13:43.783 "copy": true, 00:13:43.783 "nvme_iov_md": false 00:13:43.783 }, 00:13:43.783 "memory_domains": [ 00:13:43.783 { 00:13:43.783 "dma_device_id": "system", 00:13:43.783 "dma_device_type": 1 00:13:43.783 } 00:13:43.783 ], 00:13:43.783 "driver_specific": { 00:13:43.783 "nvme": [ 00:13:43.783 { 00:13:43.783 "trid": { 00:13:43.783 "trtype": "TCP", 00:13:43.783 "adrfam": "IPv4", 00:13:43.783 "traddr": "10.0.0.2", 00:13:43.783 "trsvcid": "4420", 00:13:43.783 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:43.783 }, 00:13:43.783 "ctrlr_data": { 00:13:43.783 "cntlid": 1, 00:13:43.783 "vendor_id": "0x8086", 00:13:43.783 "model_number": "SPDK bdev Controller", 00:13:43.783 "serial_number": "SPDK0", 00:13:43.783 "firmware_revision": "25.01", 00:13:43.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:43.783 "oacs": { 00:13:43.783 "security": 0, 00:13:43.783 "format": 0, 00:13:43.783 "firmware": 0, 00:13:43.783 "ns_manage": 0 00:13:43.783 }, 00:13:43.783 "multi_ctrlr": true, 00:13:43.783 
"ana_reporting": false 00:13:43.783 }, 00:13:43.783 "vs": { 00:13:43.783 "nvme_version": "1.3" 00:13:43.783 }, 00:13:43.783 "ns_data": { 00:13:43.783 "id": 1, 00:13:43.783 "can_share": true 00:13:43.783 } 00:13:43.783 } 00:13:43.783 ], 00:13:43.783 "mp_policy": "active_passive" 00:13:43.783 } 00:13:43.783 } 00:13:43.783 ] 00:13:43.783 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2000475 00:13:43.783 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:43.783 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:43.783 Running I/O for 10 seconds... 00:13:44.720 Latency(us) 00:13:44.720 [2024-12-09T09:24:29.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.720 Nvme0n1 : 1.00 8256.00 32.25 0.00 0.00 0.00 0.00 0.00 00:13:44.720 [2024-12-09T09:24:29.374Z] =================================================================================================================== 00:13:44.720 [2024-12-09T09:24:29.374Z] Total : 8256.00 32.25 0.00 0.00 0.00 0.00 0.00 00:13:44.720 00:13:45.658 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:45.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.917 Nvme0n1 : 2.00 8192.00 32.00 0.00 0.00 0.00 0.00 0.00 00:13:45.917 [2024-12-09T09:24:30.571Z] =================================================================================================================== 00:13:45.917 [2024-12-09T09:24:30.571Z] Total : 8192.00 32.00 0.00 0.00 0.00 0.00 0.00 00:13:45.917 00:13:46.177 true 00:13:46.177 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:46.177 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:46.745 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:46.745 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:46.745 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2000475 00:13:46.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.745 Nvme0n1 : 3.00 7747.33 30.26 0.00 0.00 0.00 0.00 0.00 00:13:46.745 [2024-12-09T09:24:31.399Z] =================================================================================================================== 00:13:46.745 [2024-12-09T09:24:31.399Z] Total : 7747.33 30.26 0.00 0.00 0.00 0.00 0.00 00:13:46.745 00:13:48.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.132 Nvme0n1 : 4.00 7969.50 31.13 0.00 0.00 0.00 0.00 0.00 00:13:48.132 [2024-12-09T09:24:32.786Z] 
=================================================================================================================== 00:13:48.132 [2024-12-09T09:24:32.786Z] Total : 7969.50 31.13 0.00 0.00 0.00 0.00 0.00 00:13:48.132 00:13:49.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.066 Nvme0n1 : 5.00 7721.80 30.16 0.00 0.00 0.00 0.00 0.00 00:13:49.066 [2024-12-09T09:24:33.720Z] =================================================================================================================== 00:13:49.066 [2024-12-09T09:24:33.720Z] Total : 7721.80 30.16 0.00 0.00 0.00 0.00 0.00 00:13:49.066 00:13:50.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.001 Nvme0n1 : 6.00 7588.50 29.64 0.00 0.00 0.00 0.00 0.00 00:13:50.001 [2024-12-09T09:24:34.655Z] =================================================================================================================== 00:13:50.001 [2024-12-09T09:24:34.655Z] Total : 7588.50 29.64 0.00 0.00 0.00 0.00 0.00 00:13:50.001 00:13:50.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.937 Nvme0n1 : 7.00 7457.00 29.13 0.00 0.00 0.00 0.00 0.00 00:13:50.937 [2024-12-09T09:24:35.591Z] =================================================================================================================== 00:13:50.937 [2024-12-09T09:24:35.591Z] Total : 7457.00 29.13 0.00 0.00 0.00 0.00 0.00 00:13:50.937 00:13:51.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.873 Nvme0n1 : 8.00 7334.50 28.65 0.00 0.00 0.00 0.00 0.00 00:13:51.873 [2024-12-09T09:24:36.527Z] =================================================================================================================== 00:13:51.873 [2024-12-09T09:24:36.527Z] Total : 7334.50 28.65 0.00 0.00 0.00 0.00 0.00 00:13:51.873 00:13:52.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.808 Nvme0n1 : 9.00 7267.44 28.39 0.00 0.00 0.00 0.00 0.00 00:13:52.808 [2024-12-09T09:24:37.462Z] =================================================================================================================== 00:13:52.808 [2024-12-09T09:24:37.462Z] Total : 7267.44 28.39 0.00 0.00 0.00 0.00 0.00 00:13:52.808 00:13:53.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.745 Nvme0n1 : 10.00 7239.20 28.28 0.00 0.00 0.00 0.00 0.00 00:13:53.745 [2024-12-09T09:24:38.399Z] =================================================================================================================== 00:13:53.745 [2024-12-09T09:24:38.399Z] Total : 7239.20 28.28 0.00 0.00 0.00 0.00 0.00 00:13:53.745 00:13:53.745 00:13:53.745 Latency(us) 00:13:53.745 [2024-12-09T09:24:38.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.745 Nvme0n1 : 10.01 7242.55 28.29 0.00 0.00 17661.22 8155.59 39030.33 00:13:53.745 [2024-12-09T09:24:38.399Z] =================================================================================================================== 00:13:53.745 [2024-12-09T09:24:38.399Z] Total : 7242.55 28.29 0.00 0.00 17661.22 8155.59 39030.33 00:13:53.745 { 00:13:53.745 "results": [ 00:13:53.745 { 00:13:53.745 "job": "Nvme0n1", 00:13:53.745 "core_mask": "0x2", 00:13:53.745 "workload": "randwrite", 00:13:53.745 "status": "finished", 00:13:53.745 "queue_depth": 128, 00:13:53.745 "io_size": 4096, 00:13:53.745 "runtime": 
10.013052, 00:13:53.745 "iops": 7242.547027619551, 00:13:53.745 "mibps": 28.29119932663887, 00:13:53.745 "io_failed": 0, 00:13:53.745 "io_timeout": 0, 00:13:53.745 "avg_latency_us": 17661.216994300426, 00:13:53.745 "min_latency_us": 8155.591111111111, 00:13:53.745 "max_latency_us": 39030.328888888886 00:13:53.745 } 00:13:53.745 ], 00:13:53.745 "core_count": 1 00:13:53.745 } 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2000337 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2000337 ']' 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2000337 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2000337 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2000337' 00:13:54.004 killing process with pid 2000337 00:13:54.004 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2000337 00:13:54.004 Received shutdown signal, test time was about 10.000000 seconds 00:13:54.004 00:13:54.004 Latency(us) 00:13:54.004 [2024-12-09T09:24:38.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.005 [2024-12-09T09:24:38.659Z] =================================================================================================================== 00:13:54.005 [2024-12-09T09:24:38.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:54.005 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2000337 00:13:54.263 10:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.829 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:55.086 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:55.086 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:55.343 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:55.343 10:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:55.343 10:24:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:55.909 [2024-12-09 10:24:40.295857] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:55.909 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:56.505 request: 00:13:56.505 { 00:13:56.505 "uuid": "47f7017e-fd49-428b-b33d-0f5d35545bff", 00:13:56.505 "method": "bdev_lvol_get_lvstores", 00:13:56.505 "req_id": 1 00:13:56.505 } 00:13:56.505 Got JSON-RPC error response 00:13:56.505 response: 00:13:56.505 { 00:13:56.505 "code": -19, 00:13:56.505 "message": "No such device" 00:13:56.505 } 00:13:56.505 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:13:56.505 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:56.505 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:56.505 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:56.505 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:57.073 aio_bdev 00:13:57.073 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0bd2876a-4c95-4f72-88e2-6b2c4fb84b76 00:13:57.073 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0bd2876a-4c95-4f72-88e2-6b2c4fb84b76 00:13:57.073 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.073 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:13:57.073 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.073 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.073 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:58.010 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0bd2876a-4c95-4f72-88e2-6b2c4fb84b76 -t 2000 00:13:58.577 [ 00:13:58.577 { 00:13:58.577 "name": "0bd2876a-4c95-4f72-88e2-6b2c4fb84b76", 00:13:58.577 "aliases": [ 00:13:58.577 "lvs/lvol" 00:13:58.577 ], 00:13:58.577 "product_name": "Logical Volume", 00:13:58.577 "block_size": 4096, 00:13:58.577 "num_blocks": 38912, 00:13:58.577 "uuid": "0bd2876a-4c95-4f72-88e2-6b2c4fb84b76", 00:13:58.577 "assigned_rate_limits": { 00:13:58.577 "rw_ios_per_sec": 0, 00:13:58.577 "rw_mbytes_per_sec": 0, 00:13:58.577 "r_mbytes_per_sec": 0, 00:13:58.577 "w_mbytes_per_sec": 0 00:13:58.577 }, 00:13:58.577 "claimed": false, 00:13:58.577 "zoned": false, 00:13:58.577 "supported_io_types": { 00:13:58.577 "read": true, 00:13:58.577 "write": true, 00:13:58.577 "unmap": true, 00:13:58.577 "flush": false, 00:13:58.577 "reset": true, 00:13:58.577 "nvme_admin": false, 00:13:58.577 "nvme_io": false, 00:13:58.577 "nvme_io_md": false, 00:13:58.577 "write_zeroes": true, 00:13:58.577 "zcopy": false, 00:13:58.577 "get_zone_info": false, 00:13:58.578 "zone_management": false, 00:13:58.578 "zone_append": false, 00:13:58.578 "compare": false, 00:13:58.578 "compare_and_write": false, 00:13:58.578 "abort": false, 00:13:58.578 "seek_hole": true, 00:13:58.578 "seek_data": true, 00:13:58.578 "copy": false, 00:13:58.578 "nvme_iov_md": false 00:13:58.578 }, 00:13:58.578 "driver_specific": { 00:13:58.578 "lvol": { 00:13:58.578 "lvol_store_uuid": "47f7017e-fd49-428b-b33d-0f5d35545bff", 00:13:58.578 "base_bdev": "aio_bdev", 00:13:58.578 "thin_provision": false, 00:13:58.578 "num_allocated_clusters": 38, 00:13:58.578 "snapshot": false, 00:13:58.578 "clone": false, 00:13:58.578 "esnap_clone": false 00:13:58.578 } 00:13:58.578 } 00:13:58.578 } 00:13:58.578 ] 00:13:58.578 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:13:58.578 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:58.578 
10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:58.837 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:58.837 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:13:58.837 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:59.777 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:59.777 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0bd2876a-4c95-4f72-88e2-6b2c4fb84b76 00:14:00.346 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47f7017e-fd49-428b-b33d-0f5d35545bff 00:14:00.916 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.851 00:14:01.851 real 0m24.669s 00:14:01.851 user 0m24.250s 00:14:01.851 sys 0m2.903s 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:01.851 ************************************ 00:14:01.851 END TEST lvs_grow_clean 00:14:01.851 ************************************ 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:01.851 ************************************ 00:14:01.851 START TEST lvs_grow_dirty 00:14:01.851 ************************************ 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.851 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.109 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:02.109 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:02.675 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:02.675 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:02.675 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:03.609 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:03.609 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:03.609 10:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e91a6f45-bc03-463f-90b7-00392df98e6b lvol 150 00:14:04.178 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c7a1e53b-1887-477e-b52d-74394805117f 00:14:04.178 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:04.179 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:04.747 [2024-12-09 10:24:49.277892] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:04.747 [2024-12-09 10:24:49.278077] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:04.747 true 00:14:04.747 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:04.747 10:24:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:05.313 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:05.313 10:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:05.573 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c7a1e53b-1887-477e-b52d-74394805117f 00:14:06.513 10:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:07.080 [2024-12-09 10:24:51.533626] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.080 10:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2003309 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2003309 /var/tmp/bdevperf.sock 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2003309 ']' 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.648 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:07.648 [2024-12-09 10:24:52.272819] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
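The bdevperf hand-off that follows is the same in both the clean and dirty passes: bdevperf is started with -z so it comes up idle on its own RPC socket, the test attaches the exported namespace as Nvme0, sanity-checks it with bdev_get_bdevs, and only then launches the 10-second randwrite workload, growing the lvstore two seconds in. Sketched with the same $rpc and shortened-path shorthand as above:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &    # -z: start idle, wait for RPCs
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # namespace visible?
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 2
    $rpc bdev_lvol_grow_lvstore -u $lvs       # grow the store while writes are in flight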
00:14:07.648 [2024-12-09 10:24:52.272927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003309 ] 00:14:07.907 [2024-12-09 10:24:52.405416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.907 [2024-12-09 10:24:52.522568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.166 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.166 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:14:08.166 10:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:09.104 Nvme0n1 00:14:09.104 10:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:09.670 [ 00:14:09.670 { 00:14:09.670 "name": "Nvme0n1", 00:14:09.670 "aliases": [ 00:14:09.670 "c7a1e53b-1887-477e-b52d-74394805117f" 00:14:09.670 ], 00:14:09.670 "product_name": "NVMe disk", 00:14:09.670 "block_size": 4096, 00:14:09.670 "num_blocks": 38912, 00:14:09.670 "uuid": "c7a1e53b-1887-477e-b52d-74394805117f", 00:14:09.670 "numa_id": 1, 00:14:09.670 "assigned_rate_limits": { 00:14:09.670 "rw_ios_per_sec": 0, 00:14:09.670 "rw_mbytes_per_sec": 0, 00:14:09.670 "r_mbytes_per_sec": 0, 00:14:09.670 "w_mbytes_per_sec": 0 00:14:09.670 }, 00:14:09.670 "claimed": false, 00:14:09.670 "zoned": false, 00:14:09.670 "supported_io_types": { 00:14:09.670 "read": true, 00:14:09.670 "write": true, 00:14:09.670 "unmap": true, 00:14:09.670 "flush": true, 00:14:09.670 "reset": true, 00:14:09.670 "nvme_admin": true, 00:14:09.670 "nvme_io": true, 00:14:09.670 "nvme_io_md": false, 00:14:09.670 "write_zeroes": true, 00:14:09.670 "zcopy": false, 00:14:09.670 "get_zone_info": false, 00:14:09.670 "zone_management": false, 00:14:09.670 "zone_append": false, 00:14:09.670 "compare": true, 00:14:09.670 "compare_and_write": true, 00:14:09.670 "abort": true, 00:14:09.670 "seek_hole": false, 00:14:09.670 "seek_data": false, 00:14:09.670 "copy": true, 00:14:09.670 "nvme_iov_md": false 00:14:09.670 }, 00:14:09.670 "memory_domains": [ 00:14:09.670 { 00:14:09.670 "dma_device_id": "system", 00:14:09.670 "dma_device_type": 1 00:14:09.670 } 00:14:09.670 ], 00:14:09.670 "driver_specific": { 00:14:09.670 "nvme": [ 00:14:09.670 { 00:14:09.670 "trid": { 00:14:09.670 "trtype": "TCP", 00:14:09.670 "adrfam": "IPv4", 00:14:09.670 "traddr": "10.0.0.2", 00:14:09.670 "trsvcid": "4420", 00:14:09.670 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:09.670 }, 00:14:09.670 "ctrlr_data": { 00:14:09.670 "cntlid": 1, 00:14:09.670 "vendor_id": "0x8086", 00:14:09.670 "model_number": "SPDK bdev Controller", 00:14:09.670 "serial_number": "SPDK0", 00:14:09.670 "firmware_revision": "25.01", 00:14:09.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:09.670 "oacs": { 00:14:09.670 "security": 0, 00:14:09.670 "format": 0, 00:14:09.670 "firmware": 0, 00:14:09.670 "ns_manage": 0 00:14:09.670 }, 00:14:09.670 "multi_ctrlr": true, 00:14:09.670 
"ana_reporting": false 00:14:09.670 }, 00:14:09.670 "vs": { 00:14:09.670 "nvme_version": "1.3" 00:14:09.670 }, 00:14:09.670 "ns_data": { 00:14:09.670 "id": 1, 00:14:09.670 "can_share": true 00:14:09.670 } 00:14:09.670 } 00:14:09.670 ], 00:14:09.670 "mp_policy": "active_passive" 00:14:09.670 } 00:14:09.670 } 00:14:09.670 ] 00:14:09.670 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:09.670 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2003564 00:14:09.670 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:09.929 Running I/O for 10 seconds... 00:14:10.865 Latency(us) 00:14:10.865 [2024-12-09T09:24:55.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.865 Nvme0n1 : 1.00 6131.00 23.95 0.00 0.00 0.00 0.00 0.00 00:14:10.865 [2024-12-09T09:24:55.519Z] =================================================================================================================== 00:14:10.865 [2024-12-09T09:24:55.519Z] Total : 6131.00 23.95 0.00 0.00 0.00 0.00 0.00 00:14:10.865 00:14:11.802 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:11.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.802 Nvme0n1 : 2.00 6240.50 24.38 0.00 0.00 0.00 0.00 0.00 00:14:11.802 [2024-12-09T09:24:56.456Z] =================================================================================================================== 00:14:11.802 [2024-12-09T09:24:56.456Z] Total : 6240.50 24.38 0.00 0.00 0.00 0.00 0.00 00:14:11.802 00:14:12.061 true 00:14:12.319 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:12.319 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:12.578 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:12.578 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:12.578 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2003564 00:14:12.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.838 Nvme0n1 : 3.00 6319.33 24.68 0.00 0.00 0.00 0.00 0.00 00:14:12.838 [2024-12-09T09:24:57.492Z] =================================================================================================================== 00:14:12.838 [2024-12-09T09:24:57.492Z] Total : 6319.33 24.68 0.00 0.00 0.00 0.00 0.00 00:14:12.838 00:14:13.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.777 Nvme0n1 : 4.00 6358.75 24.84 0.00 0.00 0.00 0.00 0.00 00:14:13.777 [2024-12-09T09:24:58.431Z] 
=================================================================================================================== 00:14:13.777 [2024-12-09T09:24:58.431Z] Total : 6358.75 24.84 0.00 0.00 0.00 0.00 0.00 00:14:13.777 00:14:15.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.160 Nvme0n1 : 5.00 6363.80 24.86 0.00 0.00 0.00 0.00 0.00 00:14:15.160 [2024-12-09T09:24:59.814Z] =================================================================================================================== 00:14:15.160 [2024-12-09T09:24:59.814Z] Total : 6363.80 24.86 0.00 0.00 0.00 0.00 0.00 00:14:15.160 00:14:16.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.100 Nvme0n1 : 6.00 6509.67 25.43 0.00 0.00 0.00 0.00 0.00 00:14:16.100 [2024-12-09T09:25:00.754Z] =================================================================================================================== 00:14:16.100 [2024-12-09T09:25:00.754Z] Total : 6509.67 25.43 0.00 0.00 0.00 0.00 0.00 00:14:16.100 00:14:17.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.040 Nvme0n1 : 7.00 6577.57 25.69 0.00 0.00 0.00 0.00 0.00 00:14:17.040 [2024-12-09T09:25:01.694Z] =================================================================================================================== 00:14:17.040 [2024-12-09T09:25:01.694Z] Total : 6577.57 25.69 0.00 0.00 0.00 0.00 0.00 00:14:17.040 00:14:17.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.981 Nvme0n1 : 8.00 6565.00 25.64 0.00 0.00 0.00 0.00 0.00 00:14:17.981 [2024-12-09T09:25:02.635Z] =================================================================================================================== 00:14:17.981 [2024-12-09T09:25:02.635Z] Total : 6565.00 25.64 0.00 0.00 0.00 0.00 0.00 00:14:17.981 00:14:18.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.925 Nvme0n1 : 9.00 6569.33 25.66 0.00 0.00 0.00 0.00 0.00 00:14:18.925 [2024-12-09T09:25:03.579Z] =================================================================================================================== 00:14:18.925 [2024-12-09T09:25:03.579Z] Total : 6569.33 25.66 0.00 0.00 0.00 0.00 0.00 00:14:18.925 00:14:19.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.860 Nvme0n1 : 10.00 6560.10 25.63 0.00 0.00 0.00 0.00 0.00 00:14:19.860 [2024-12-09T09:25:04.514Z] =================================================================================================================== 00:14:19.860 [2024-12-09T09:25:04.514Z] Total : 6560.10 25.63 0.00 0.00 0.00 0.00 0.00 00:14:19.860 00:14:19.860 00:14:19.860 Latency(us) 00:14:19.860 [2024-12-09T09:25:04.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.860 Nvme0n1 : 10.01 6566.03 25.65 0.00 0.00 19480.63 8738.13 37671.06 00:14:19.860 [2024-12-09T09:25:04.514Z] =================================================================================================================== 00:14:19.860 [2024-12-09T09:25:04.514Z] Total : 6566.03 25.65 0.00 0.00 19480.63 8738.13 37671.06 00:14:19.860 { 00:14:19.860 "results": [ 00:14:19.860 { 00:14:19.860 "job": "Nvme0n1", 00:14:19.860 "core_mask": "0x2", 00:14:19.860 "workload": "randwrite", 00:14:19.860 "status": "finished", 00:14:19.860 "queue_depth": 128, 00:14:19.860 "io_size": 4096, 00:14:19.860 "runtime": 
10.01046, 00:14:19.860 "iops": 6566.031930600592, 00:14:19.860 "mibps": 25.648562228908563, 00:14:19.860 "io_failed": 0, 00:14:19.860 "io_timeout": 0, 00:14:19.860 "avg_latency_us": 19480.626527126253, 00:14:19.860 "min_latency_us": 8738.133333333333, 00:14:19.860 "max_latency_us": 37671.0637037037 00:14:19.860 } 00:14:19.860 ], 00:14:19.860 "core_count": 1 00:14:19.860 } 00:14:19.860 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2003309 00:14:19.860 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2003309 ']' 00:14:19.860 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2003309 00:14:19.860 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:14:19.860 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.860 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2003309 00:14:20.118 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:20.118 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:20.118 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2003309' 00:14:20.118 killing process with pid 2003309 00:14:20.118 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2003309 00:14:20.118 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.118 00:14:20.118 Latency(us) 00:14:20.118 [2024-12-09T09:25:04.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.118 [2024-12-09T09:25:04.772Z] =================================================================================================================== 00:14:20.118 [2024-12-09T09:25:04.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.118 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2003309 00:14:20.375 10:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.941 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:21.200 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:21.200 10:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:21.769 10:25:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1999628 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1999628 00:14:21.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1999628 Killed "${NVMF_APP[@]}" "$@" 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2004910 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2004910 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2004910 ']' 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.769 10:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:22.027 [2024-12-09 10:25:06.467059] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:14:22.027 [2024-12-09 10:25:06.467232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.027 [2024-12-09 10:25:06.652515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.287 [2024-12-09 10:25:06.769313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.287 [2024-12-09 10:25:06.769429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.287 [2024-12-09 10:25:06.769466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.287 [2024-12-09 10:25:06.769496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
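This is the step that distinguishes the dirty pass: because the check above matched [[ dirty == dirty ]], the first target (pid 1999628) was killed with SIGKILL while the grown lvstore still had unflushed metadata, and a second nvmf_tgt (pid 2004910) is started in the same namespace. When aio_bdev is re-created under the new target, the blobstore load detects the unclean shutdown and runs recovery, as the "Performing recovery on blobstore" notices just below show. Sketched with the same shorthand:

    kill -9 $nvmfpid                           # SIGKILL: lvstore metadata is left dirty
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target, same namespace
    $rpc bdev_aio_create aio_file aio_bdev 4096        # reload triggers blobstore recovery
    $rpc bdev_wait_for_examine                         # lvol reappears once lvs is recovered
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'   # expected: 61

The subsequent get_lvstores checks (61 free of 99 total data clusters) confirm that the geometry added by bdev_lvol_grow_lvstore survived the unclean shutdown.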
00:14:22.287 [2024-12-09 10:25:06.769522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.287 [2024-12-09 10:25:06.770553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.262 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.262 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:14:23.262 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.262 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.262 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:23.262 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.262 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.860 [2024-12-09 10:25:08.336654] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:23.860 [2024-12-09 10:25:08.337028] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:23.860 [2024-12-09 10:25:08.337172] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:23.860 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:23.860 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c7a1e53b-1887-477e-b52d-74394805117f 00:14:23.861 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c7a1e53b-1887-477e-b52d-74394805117f 00:14:23.861 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.861 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:14:23.861 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.861 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.861 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:24.121 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c7a1e53b-1887-477e-b52d-74394805117f -t 2000 00:14:25.062 [ 00:14:25.062 { 00:14:25.062 "name": "c7a1e53b-1887-477e-b52d-74394805117f", 00:14:25.062 "aliases": [ 00:14:25.062 "lvs/lvol" 00:14:25.062 ], 00:14:25.062 "product_name": "Logical Volume", 00:14:25.062 "block_size": 4096, 00:14:25.062 "num_blocks": 38912, 00:14:25.062 "uuid": "c7a1e53b-1887-477e-b52d-74394805117f", 00:14:25.062 "assigned_rate_limits": { 00:14:25.062 "rw_ios_per_sec": 0, 00:14:25.062 "rw_mbytes_per_sec": 0, 
00:14:25.062 "r_mbytes_per_sec": 0, 00:14:25.062 "w_mbytes_per_sec": 0 00:14:25.062 }, 00:14:25.062 "claimed": false, 00:14:25.062 "zoned": false, 00:14:25.062 "supported_io_types": { 00:14:25.062 "read": true, 00:14:25.062 "write": true, 00:14:25.062 "unmap": true, 00:14:25.062 "flush": false, 00:14:25.062 "reset": true, 00:14:25.062 "nvme_admin": false, 00:14:25.062 "nvme_io": false, 00:14:25.062 "nvme_io_md": false, 00:14:25.062 "write_zeroes": true, 00:14:25.062 "zcopy": false, 00:14:25.062 "get_zone_info": false, 00:14:25.062 "zone_management": false, 00:14:25.062 "zone_append": false, 00:14:25.062 "compare": false, 00:14:25.062 "compare_and_write": false, 00:14:25.062 "abort": false, 00:14:25.062 "seek_hole": true, 00:14:25.062 "seek_data": true, 00:14:25.062 "copy": false, 00:14:25.062 "nvme_iov_md": false 00:14:25.062 }, 00:14:25.062 "driver_specific": { 00:14:25.062 "lvol": { 00:14:25.062 "lvol_store_uuid": "e91a6f45-bc03-463f-90b7-00392df98e6b", 00:14:25.062 "base_bdev": "aio_bdev", 00:14:25.062 "thin_provision": false, 00:14:25.062 "num_allocated_clusters": 38, 00:14:25.062 "snapshot": false, 00:14:25.062 "clone": false, 00:14:25.062 "esnap_clone": false 00:14:25.062 } 00:14:25.062 } 00:14:25.062 } 00:14:25.062 ] 00:14:25.062 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:14:25.062 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:25.062 10:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:25.633 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:25.633 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:25.633 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:25.893 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:25.893 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:26.834 [2024-12-09 10:25:11.157763] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:26.834 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:26.834 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:14:26.834 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:26.834 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.834 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.834 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.834 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.835 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.835 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.835 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.835 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.835 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:27.404 request: 00:14:27.404 { 00:14:27.404 "uuid": "e91a6f45-bc03-463f-90b7-00392df98e6b", 00:14:27.404 "method": "bdev_lvol_get_lvstores", 00:14:27.404 "req_id": 1 00:14:27.404 } 00:14:27.404 Got JSON-RPC error response 00:14:27.404 response: 00:14:27.404 { 00:14:27.404 "code": -19, 00:14:27.404 "message": "No such device" 00:14:27.404 } 00:14:27.404 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:14:27.405 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.405 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.405 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.405 10:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:27.665 aio_bdev 00:14:27.924 10:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c7a1e53b-1887-477e-b52d-74394805117f 00:14:27.924 10:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c7a1e53b-1887-477e-b52d-74394805117f 00:14:27.924 10:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.924 10:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:14:27.924 10:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.924 10:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.924 10:25:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.183 10:25:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c7a1e53b-1887-477e-b52d-74394805117f -t 2000 00:14:28.750 [ 00:14:28.750 { 00:14:28.750 "name": "c7a1e53b-1887-477e-b52d-74394805117f", 00:14:28.750 "aliases": [ 00:14:28.750 "lvs/lvol" 00:14:28.750 ], 00:14:28.750 "product_name": "Logical Volume", 00:14:28.750 "block_size": 4096, 00:14:28.750 "num_blocks": 38912, 00:14:28.750 "uuid": "c7a1e53b-1887-477e-b52d-74394805117f", 00:14:28.750 "assigned_rate_limits": { 00:14:28.750 "rw_ios_per_sec": 0, 00:14:28.750 "rw_mbytes_per_sec": 0, 00:14:28.750 "r_mbytes_per_sec": 0, 00:14:28.750 "w_mbytes_per_sec": 0 00:14:28.750 }, 00:14:28.750 "claimed": false, 00:14:28.750 "zoned": false, 00:14:28.750 "supported_io_types": { 00:14:28.750 "read": true, 00:14:28.750 "write": true, 00:14:28.750 "unmap": true, 00:14:28.750 "flush": false, 00:14:28.750 "reset": true, 00:14:28.750 "nvme_admin": false, 00:14:28.750 "nvme_io": false, 00:14:28.750 "nvme_io_md": false, 00:14:28.750 "write_zeroes": true, 00:14:28.750 "zcopy": false, 00:14:28.750 "get_zone_info": false, 00:14:28.750 "zone_management": false, 00:14:28.750 "zone_append": false, 00:14:28.750 "compare": false, 00:14:28.750 "compare_and_write": false, 00:14:28.750 "abort": false, 00:14:28.750 "seek_hole": true, 00:14:28.750 "seek_data": true, 00:14:28.750 "copy": false, 00:14:28.750 "nvme_iov_md": false 00:14:28.750 }, 00:14:28.750 "driver_specific": { 00:14:28.750 "lvol": { 00:14:28.750 "lvol_store_uuid": "e91a6f45-bc03-463f-90b7-00392df98e6b", 00:14:28.750 "base_bdev": "aio_bdev", 00:14:28.750 "thin_provision": false, 00:14:28.750 "num_allocated_clusters": 38, 00:14:28.750 "snapshot": false, 00:14:28.750 "clone": false, 00:14:28.750 "esnap_clone": false 00:14:28.750 } 00:14:28.750 } 00:14:28.750 } 00:14:28.750 ] 00:14:28.750 10:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:14:28.750 10:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:28.750 10:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:29.009 10:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:29.009 10:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:29.009 10:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:29.946 10:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:29.946 10:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c7a1e53b-1887-477e-b52d-74394805117f 00:14:30.515 10:25:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e91a6f45-bc03-463f-90b7-00392df98e6b 00:14:31.082 10:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:31.647 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:31.906 00:14:31.906 real 0m30.065s 00:14:31.906 user 1m11.133s 00:14:31.906 sys 0m6.216s 00:14:31.906 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.906 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:31.906 ************************************ 00:14:31.906 END TEST lvs_grow_dirty 00:14:31.906 ************************************ 00:14:31.906 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:31.906 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:14:31.906 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:31.907 nvmf_trace.0 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.907 rmmod nvme_tcp 00:14:31.907 rmmod nvme_fabrics 00:14:31.907 rmmod nvme_keyring 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:14:31.907 
10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2004910 ']' 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2004910 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2004910 ']' 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2004910 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2004910 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2004910' 00:14:31.907 killing process with pid 2004910 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2004910 00:14:31.907 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2004910 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.477 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.385 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:34.385 00:14:34.385 real 1m1.912s 00:14:34.385 user 1m47.305s 00:14:34.385 sys 0m12.290s 00:14:34.385 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.385 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:34.385 ************************************ 00:14:34.385 END TEST nvmf_lvs_grow 00:14:34.385 ************************************ 00:14:34.385 10:25:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:34.385 10:25:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.385 10:25:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.385 10:25:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:34.386 ************************************ 00:14:34.386 START TEST nvmf_bdev_io_wait 00:14:34.386 ************************************ 00:14:34.386 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:34.646 * Looking for test storage... 00:14:34.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:34.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.646 --rc genhtml_branch_coverage=1 00:14:34.646 --rc genhtml_function_coverage=1 00:14:34.646 --rc genhtml_legend=1 00:14:34.646 --rc geninfo_all_blocks=1 00:14:34.646 --rc geninfo_unexecuted_blocks=1 00:14:34.646 00:14:34.646 ' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:34.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.646 --rc genhtml_branch_coverage=1 00:14:34.646 --rc genhtml_function_coverage=1 00:14:34.646 --rc genhtml_legend=1 00:14:34.646 --rc geninfo_all_blocks=1 00:14:34.646 --rc geninfo_unexecuted_blocks=1 00:14:34.646 00:14:34.646 ' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:34.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.646 --rc genhtml_branch_coverage=1 00:14:34.646 --rc genhtml_function_coverage=1 00:14:34.646 --rc genhtml_legend=1 00:14:34.646 --rc geninfo_all_blocks=1 00:14:34.646 --rc geninfo_unexecuted_blocks=1 00:14:34.646 00:14:34.646 ' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:34.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.646 --rc genhtml_branch_coverage=1 00:14:34.646 --rc genhtml_function_coverage=1 00:14:34.646 --rc genhtml_legend=1 00:14:34.646 --rc geninfo_all_blocks=1 00:14:34.646 --rc geninfo_unexecuted_blocks=1 00:14:34.646 00:14:34.646 ' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.646 10:25:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:34.646 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:14:34.647 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:37.939 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:37.939 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.939 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.940 10:25:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:37.940 Found net devices under 0000:84:00.0: cvl_0_0 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:37.940 Found net devices under 0000:84:00.1: cvl_0_1 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:37.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:14:37.940 00:14:37.940 --- 10.0.0.2 ping statistics --- 00:14:37.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.940 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:14:37.940 00:14:37.940 --- 10.0.0.1 ping statistics --- 00:14:37.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.940 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2008127 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2008127 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2008127 ']' 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.940 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:37.940 [2024-12-09 10:25:22.337747] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:14:37.940 [2024-12-09 10:25:22.337927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.940 [2024-12-09 10:25:22.513847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.199 [2024-12-09 10:25:22.636699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.199 [2024-12-09 10:25:22.636790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.199 [2024-12-09 10:25:22.636811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.199 [2024-12-09 10:25:22.636827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.199 [2024-12-09 10:25:22.636841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.199 [2024-12-09 10:25:22.638950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.199 [2024-12-09 10:25:22.639015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.199 [2024-12-09 10:25:22.639087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.199 [2024-12-09 10:25:22.639090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.199 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:14:38.459 [2024-12-09 10:25:22.885542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 Malloc0 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 [2024-12-09 10:25:22.938926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2008272 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2008274 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:38.459 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:38.460 { 00:14:38.460 "params": { 
00:14:38.460 "name": "Nvme$subsystem", 00:14:38.460 "trtype": "$TEST_TRANSPORT", 00:14:38.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "$NVMF_PORT", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.460 "hdgst": ${hdgst:-false}, 00:14:38.460 "ddgst": ${ddgst:-false} 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 } 00:14:38.460 EOF 00:14:38.460 )") 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2008276 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:38.460 { 00:14:38.460 "params": { 00:14:38.460 "name": "Nvme$subsystem", 00:14:38.460 "trtype": "$TEST_TRANSPORT", 00:14:38.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "$NVMF_PORT", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.460 "hdgst": ${hdgst:-false}, 00:14:38.460 "ddgst": ${ddgst:-false} 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 } 00:14:38.460 EOF 00:14:38.460 )") 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2008279 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:38.460 { 00:14:38.460 "params": { 00:14:38.460 "name": "Nvme$subsystem", 00:14:38.460 "trtype": "$TEST_TRANSPORT", 00:14:38.460 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "$NVMF_PORT", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.460 "hdgst": ${hdgst:-false}, 00:14:38.460 "ddgst": ${ddgst:-false} 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 } 00:14:38.460 EOF 00:14:38.460 )") 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:38.460 { 00:14:38.460 "params": { 00:14:38.460 "name": "Nvme$subsystem", 00:14:38.460 "trtype": "$TEST_TRANSPORT", 00:14:38.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "$NVMF_PORT", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.460 "hdgst": ${hdgst:-false}, 00:14:38.460 "ddgst": ${ddgst:-false} 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 } 00:14:38.460 EOF 00:14:38.460 )") 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2008272 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:38.460 "params": { 00:14:38.460 "name": "Nvme1", 00:14:38.460 "trtype": "tcp", 00:14:38.460 "traddr": "10.0.0.2", 00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "4420", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.460 "hdgst": false, 00:14:38.460 "ddgst": false 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 }' 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:38.460 "params": { 00:14:38.460 "name": "Nvme1", 00:14:38.460 "trtype": "tcp", 00:14:38.460 "traddr": "10.0.0.2", 00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "4420", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.460 "hdgst": false, 00:14:38.460 "ddgst": false 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 }' 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:38.460 "params": { 00:14:38.460 "name": "Nvme1", 00:14:38.460 "trtype": "tcp", 00:14:38.460 "traddr": "10.0.0.2", 00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "4420", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.460 "hdgst": false, 00:14:38.460 "ddgst": false 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 }' 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:14:38.460 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:38.460 "params": { 00:14:38.460 "name": "Nvme1", 00:14:38.460 "trtype": "tcp", 00:14:38.460 "traddr": "10.0.0.2", 00:14:38.460 "adrfam": "ipv4", 00:14:38.460 "trsvcid": "4420", 00:14:38.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.460 "hdgst": false, 00:14:38.460 "ddgst": false 00:14:38.460 }, 00:14:38.460 "method": "bdev_nvme_attach_controller" 00:14:38.460 }' 00:14:38.460 [2024-12-09 10:25:22.990657] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:14:38.460 [2024-12-09 10:25:22.990757] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:38.460 [2024-12-09 10:25:22.993534] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:14:38.460 [2024-12-09 10:25:22.993531] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:14:38.460 [2024-12-09 10:25:22.993532] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
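For readers following the trace: the four bdevperf launches above each receive their config over a process substitution (--json /dev/fd/63), fed by the gen_nvmf_target_json output whose resolved form is printed just above. A minimal standalone sketch of an equivalent invocation; note it assumes the standard SPDK JSON config wrapper (a top-level "subsystems" array with a "bdev" section) around the bdev_nvme_attach_controller entry shown in the trace, since the wrapper itself is not printed here, and /tmp/nvmf_bdev.json is a hypothetical file name:

    # Sketch only: param values copied from the resolved config printed above;
    # the subsystems/config wrapper is an assumption, not visible in this trace.
    cat > /tmp/nvmf_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Roughly equivalent to the first launch above (core mask 0x10, write workload):
    ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvmf_bdev.json -q 128 -o 4096 -w write -t 1 -s 256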
00:14:38.460 [2024-12-09 10:25:22.993636] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:14:38.460 [2024-12-09 10:25:22.993638] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:14:38.460 [2024-12-09 10:25:22.993638] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:14:38.720 [2024-12-09 10:25:23.150190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.720 [2024-12-09 10:25:23.199800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:14:38.720 [2024-12-09 10:25:23.273766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.720 [2024-12-09 10:25:23.329085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:38.977 [2024-12-09 10:25:23.384354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.977 [2024-12-09 10:25:23.442152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:38.977 [2024-12-09 10:25:23.511395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.977 [2024-12-09 10:25:23.563970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:39.234 Running I/O for 1 seconds... 00:14:39.234 Running I/O for 1 seconds... 00:14:39.234 Running I/O for 1 seconds... 00:14:39.234 Running I/O for 1 seconds...
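The four "Running I/O for 1 seconds..." lines above come from four bdevperf processes started in parallel, one per workload (write/read/flush/unmap), each pinned to its own core mask and tracked by PID (WRITE_PID, READ_PID, FLUSH_PID, UNMAP_PID) so the script can wait on each in turn. A condensed sketch of that fan-out pattern, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available:

    # Sketch of the parallel launch/wait pattern; core masks, shm ids and
    # workloads match the trace above.
    pids=()
    i=1
    for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
      read -r mask workload <<< "$spec"
      ./build/examples/bdevperf -m "$mask" -i "$i" --json <(gen_nvmf_target_json) \
          -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
      pids+=($!)
      i=$((i + 1))
    done
    wait "${pids[@]}"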
00:14:40.165 7055.00 IOPS, 27.56 MiB/s 00:14:40.165 Latency(us) 00:14:40.165 [2024-12-09T09:25:24.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.165 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:40.165 Nvme1n1 : 1.02 7071.55 27.62 0.00 0.00 18023.86 6893.42 32428.18 00:14:40.165 [2024-12-09T09:25:24.819Z] =================================================================================================================== 00:14:40.165 [2024-12-09T09:25:24.819Z] Total : 7071.55 27.62 0.00 0.00 18023.86 6893.42 32428.18 00:14:40.165 8763.00 IOPS, 34.23 MiB/s 00:14:40.165 Latency(us) 00:14:40.165 [2024-12-09T09:25:24.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.165 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:40.165 Nvme1n1 : 1.01 8802.67 34.39 0.00 0.00 14466.09 7718.68 22330.79 00:14:40.165 [2024-12-09T09:25:24.819Z] =================================================================================================================== 00:14:40.165 [2024-12-09T09:25:24.819Z] Total : 8802.67 34.39 0.00 0.00 14466.09 7718.68 22330.79 00:14:40.165 6717.00 IOPS, 26.24 MiB/s 00:14:40.165 Latency(us) 00:14:40.165 [2024-12-09T09:25:24.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.165 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:40.165 Nvme1n1 : 1.01 6816.96 26.63 0.00 0.00 18715.64 4587.52 41166.32 00:14:40.165 [2024-12-09T09:25:24.819Z] =================================================================================================================== 00:14:40.165 [2024-12-09T09:25:24.819Z] Total : 6816.96 26.63 0.00 0.00 18715.64 4587.52 41166.32 00:14:40.423 187160.00 IOPS, 731.09 MiB/s 00:14:40.423 Latency(us) 00:14:40.423 [2024-12-09T09:25:25.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.423 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:40.423 Nvme1n1 : 1.00 186778.01 729.60 0.00 0.00 681.47 373.19 2002.49 00:14:40.423 [2024-12-09T09:25:25.077Z] =================================================================================================================== 00:14:40.423 [2024-12-09T09:25:25.077Z] Total : 186778.01 729.60 0.00 0.00 681.47 373.19 2002.49 00:14:40.423 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2008274 00:14:40.423 10:25:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2008276 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2008279 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:40.423 rmmod nvme_tcp 00:14:40.423 rmmod nvme_fabrics 00:14:40.423 rmmod nvme_keyring 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2008127 ']' 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2008127 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2008127 ']' 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2008127 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.423 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008127 00:14:40.681 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.681 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.681 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008127' 00:14:40.681 killing process with pid 2008127 00:14:40.681 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2008127 00:14:40.681 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2008127 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.941 10:25:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.941 10:25:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.848 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.848 00:14:42.848 real 0m8.490s 00:14:42.848 user 0m17.378s 00:14:42.848 sys 0m4.266s 00:14:42.848 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.848 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:42.848 ************************************ 00:14:42.848 END TEST nvmf_bdev_io_wait 00:14:42.848 ************************************ 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:43.107 ************************************ 00:14:43.107 START TEST nvmf_queue_depth 00:14:43.107 ************************************ 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:43.107 * Looking for test storage... 
00:14:43.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:14:43.107 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:14:43.367 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:43.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.368 --rc genhtml_branch_coverage=1 00:14:43.368 --rc genhtml_function_coverage=1 00:14:43.368 --rc genhtml_legend=1 00:14:43.368 --rc geninfo_all_blocks=1 00:14:43.368 --rc geninfo_unexecuted_blocks=1 00:14:43.368 00:14:43.368 ' 00:14:43.368 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:43.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.369 --rc genhtml_branch_coverage=1 00:14:43.369 --rc genhtml_function_coverage=1 00:14:43.369 --rc genhtml_legend=1 00:14:43.369 --rc geninfo_all_blocks=1 00:14:43.369 --rc geninfo_unexecuted_blocks=1 00:14:43.369 00:14:43.369 ' 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:43.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.369 --rc genhtml_branch_coverage=1 00:14:43.369 --rc genhtml_function_coverage=1 00:14:43.369 --rc genhtml_legend=1 00:14:43.369 --rc geninfo_all_blocks=1 00:14:43.369 --rc geninfo_unexecuted_blocks=1 00:14:43.369 00:14:43.369 ' 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:43.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.369 --rc genhtml_branch_coverage=1 00:14:43.369 --rc genhtml_function_coverage=1 00:14:43.369 --rc genhtml_legend=1 00:14:43.369 --rc geninfo_all_blocks=1 00:14:43.369 --rc geninfo_unexecuted_blocks=1 00:14:43.369 00:14:43.369 ' 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.369 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:14:43.370 10:25:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.664 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:46.665 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:46.665 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:46.665 Found net devices under 0000:84:00.0: cvl_0_0 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:46.665 Found net devices under 0000:84:00.1: cvl_0_1 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:46.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:14:46.665 00:14:46.665 --- 10.0.0.2 ping statistics --- 00:14:46.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.665 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:14:46.665 00:14:46.665 --- 10.0.0.1 ping statistics --- 00:14:46.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.665 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2010654 00:14:46.665 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.665 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2010654 00:14:46.665 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2010654 ']' 00:14:46.665 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.665 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.665 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.665 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.665 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:46.665 [2024-12-09 10:25:31.122565] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:14:46.666 [2024-12-09 10:25:31.122764] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.666 [2024-12-09 10:25:31.317296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.924 [2024-12-09 10:25:31.437626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.924 [2024-12-09 10:25:31.437747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.924 [2024-12-09 10:25:31.437790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.924 [2024-12-09 10:25:31.437820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.924 [2024-12-09 10:25:31.437848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.924 [2024-12-09 10:25:31.439210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.183 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.184 [2024-12-09 10:25:31.734299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.184 Malloc0 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.184 10:25:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.184 [2024-12-09 10:25:31.816838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2010685 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2010685 /var/tmp/bdevperf.sock 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2010685 ']' 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.184 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.442 [2024-12-09 10:25:31.914986] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
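The queue-depth measurement that follows uses bdevperf in wait mode: -z keeps the process idle on its own RPC socket (-r /var/tmp/bdevperf.sock) until a controller is attached and perform_tests is issued. A condensed sketch of the three steps visible in this trace, with paths relative to the SPDK repo root:

    # Sketch of the queue_depth.sh flow: start bdevperf idle, attach the
    # NVMe-oF controller over its RPC socket, then kick off the 10s verify run.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests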
00:14:47.442 [2024-12-09 10:25:31.915147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2010685 ] 00:14:47.442 [2024-12-09 10:25:32.064239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.701 [2024-12-09 10:25:32.186053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.959 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.959 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:14:47.959 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:47.959 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.959 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 NVMe0n1 00:14:48.219 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:48.219 Running I/O for 10 seconds... 00:14:50.529 3072.00 IOPS, 12.00 MiB/s [2024-12-09T09:25:36.116Z] 3440.50 IOPS, 13.44 MiB/s [2024-12-09T09:25:37.086Z] 3417.33 IOPS, 13.35 MiB/s [2024-12-09T09:25:38.027Z] 3552.00 IOPS, 13.88 MiB/s [2024-12-09T09:25:38.969Z] 3490.60 IOPS, 13.64 MiB/s [2024-12-09T09:25:39.911Z] 3577.50 IOPS, 13.97 MiB/s [2024-12-09T09:25:40.850Z] 3544.00 IOPS, 13.84 MiB/s [2024-12-09T09:25:41.789Z] 3582.50 IOPS, 13.99 MiB/s [2024-12-09T09:25:43.168Z] 3581.78 IOPS, 13.99 MiB/s [2024-12-09T09:25:43.168Z] 3583.50 IOPS, 14.00 MiB/s 00:14:58.514 Latency(us) 00:14:58.514 [2024-12-09T09:25:43.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.514 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:58.514 Verification LBA range: start 0x0 length 0x4000 00:14:58.514 NVMe0n1 : 10.20 3613.14 14.11 0.00 0.00 280880.70 50098.63 171655.77 00:14:58.514 [2024-12-09T09:25:43.168Z] =================================================================================================================== 00:14:58.514 [2024-12-09T09:25:43.168Z] Total : 3613.14 14.11 0.00 0.00 280880.70 50098.63 171655.77 00:14:58.514 { 00:14:58.514 "results": [ 00:14:58.514 { 00:14:58.514 "job": "NVMe0n1", 00:14:58.514 "core_mask": "0x1", 00:14:58.514 "workload": "verify", 00:14:58.514 "status": "finished", 00:14:58.514 "verify_range": { 00:14:58.514 "start": 0, 00:14:58.514 "length": 16384 00:14:58.514 }, 00:14:58.514 "queue_depth": 1024, 00:14:58.514 "io_size": 4096, 00:14:58.514 "runtime": 10.19944, 00:14:58.514 "iops": 3613.1395449161914, 00:14:58.514 "mibps": 14.113826347328873, 00:14:58.514 "io_failed": 0, 00:14:58.514 "io_timeout": 0, 00:14:58.514 "avg_latency_us": 280880.6973746437, 00:14:58.514 "min_latency_us": 50098.63111111111, 00:14:58.514 "max_latency_us": 171655.77481481482 00:14:58.514 } 00:14:58.514 ], 00:14:58.514 "core_count": 1 00:14:58.514 } 00:14:58.514 10:25:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2010685 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2010685 ']' 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2010685 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2010685 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2010685' 00:14:58.514 killing process with pid 2010685 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2010685 00:14:58.514 Received shutdown signal, test time was about 10.000000 seconds 00:14:58.514 00:14:58.514 Latency(us) 00:14:58.514 [2024-12-09T09:25:43.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.514 [2024-12-09T09:25:43.168Z] =================================================================================================================== 00:14:58.514 [2024-12-09T09:25:43.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:58.514 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2010685 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.773 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.773 rmmod nvme_tcp 00:14:58.773 rmmod nvme_fabrics 00:14:58.773 rmmod nvme_keyring 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2010654 ']' 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2010654 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2010654 ']' 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2010654 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2010654 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2010654' 00:14:59.032 killing process with pid 2010654 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2010654 00:14:59.032 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2010654 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.291 10:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.824 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:01.824 00:15:01.824 real 0m18.399s 00:15:01.824 user 0m24.881s 00:15:01.824 sys 0m4.539s 00:15:01.824 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.824 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:01.824 ************************************ 00:15:01.824 END TEST nvmf_queue_depth 00:15:01.824 ************************************ 00:15:01.824 10:25:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:01.824 10:25:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.824 10:25:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.824 10:25:45 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.824 ************************************ 00:15:01.824 START TEST nvmf_target_multipath 00:15:01.824 ************************************ 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:01.824 * Looking for test storage... 00:15:01.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:01.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.824 --rc genhtml_branch_coverage=1 00:15:01.824 --rc genhtml_function_coverage=1 00:15:01.824 --rc genhtml_legend=1 00:15:01.824 --rc geninfo_all_blocks=1 00:15:01.824 --rc geninfo_unexecuted_blocks=1 00:15:01.824 00:15:01.824 ' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:01.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.824 --rc genhtml_branch_coverage=1 00:15:01.824 --rc genhtml_function_coverage=1 00:15:01.824 --rc genhtml_legend=1 00:15:01.824 --rc geninfo_all_blocks=1 00:15:01.824 --rc geninfo_unexecuted_blocks=1 00:15:01.824 00:15:01.824 ' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:01.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.824 --rc genhtml_branch_coverage=1 00:15:01.824 --rc genhtml_function_coverage=1 00:15:01.824 --rc genhtml_legend=1 00:15:01.824 --rc geninfo_all_blocks=1 00:15:01.824 --rc geninfo_unexecuted_blocks=1 00:15:01.824 00:15:01.824 ' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:01.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.824 --rc genhtml_branch_coverage=1 00:15:01.824 --rc genhtml_function_coverage=1 00:15:01.824 --rc genhtml_legend=1 00:15:01.824 --rc geninfo_all_blocks=1 00:15:01.824 --rc geninfo_unexecuted_blocks=1 00:15:01.824 00:15:01.824 ' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.824 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:15:01.825 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:05.117 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.117 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:05.118 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:05.118 Found net devices under 0000:84:00.0: cvl_0_0 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:05.118 10:25:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:05.118 Found net devices under 0000:84:00.1: cvl_0_1 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:05.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:15:05.118 00:15:05.118 --- 10.0.0.2 ping statistics --- 00:15:05.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.118 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:15:05.118 00:15:05.118 --- 10.0.0.1 ping statistics --- 00:15:05.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.118 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:05.118 only one NIC for nvmf test 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
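Everything from common.sh@250 onward above is nvmf_tcp_init building a two-endpoint TCP topology out of the two E810 ports on this host, by moving one port into a network namespace instead of using a second machine. Condensed to just the commands traced (device and namespace names exactly as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the ACCEPT rule carries an SPDK_NVMF comment so teardown can strip it
    # with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root namespace -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator port

Both pings answered (0.279 ms and 0.134 ms), so the fabric itself is fine; the multipath test still bails out early because it wants a second NIC pair ("only one NIC for nvmf test"), and the nvmftestfini teardown around this point unloads nvme-tcp/nvme-fabrics/nvme-keyring, restores iptables, and removes the namespace.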
00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:05.118 rmmod nvme_tcp 00:15:05.118 rmmod nvme_fabrics 00:15:05.118 rmmod nvme_keyring 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.118 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:07.023 00:15:07.023 real 0m5.436s 00:15:07.023 user 0m1.063s 00:15:07.023 sys 0m2.379s 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:07.023 ************************************ 00:15:07.023 END TEST nvmf_target_multipath 00:15:07.023 ************************************ 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:07.023 ************************************ 00:15:07.023 START TEST nvmf_zcopy 00:15:07.023 ************************************ 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:07.023 * Looking for test storage... 
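The zcopy suite starting here is launched through the same run_test wrapper as every other suite in this log (run_test nvmf_zcopy .../zcopy.sh --transport=tcp). run_test lives in autotest_common.sh and its body is not shown in this excerpt, but judging from the banners and the real/user/sys lines it emits, it behaves roughly like this sketch (internals assumed, not the verbatim SPDK implementation):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # run the suite script, e.g. zcopy.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

That is why each suite above finishes with a real/user/sys triple (e.g. multipath: real 0m5.436s) immediately before its END TEST banner.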
00:15:07.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:15:07.023 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.282 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:07.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.283 --rc genhtml_branch_coverage=1 00:15:07.283 --rc genhtml_function_coverage=1 00:15:07.283 --rc genhtml_legend=1 00:15:07.283 --rc geninfo_all_blocks=1 00:15:07.283 --rc geninfo_unexecuted_blocks=1 00:15:07.283 00:15:07.283 ' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:07.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.283 --rc genhtml_branch_coverage=1 00:15:07.283 --rc genhtml_function_coverage=1 00:15:07.283 --rc genhtml_legend=1 00:15:07.283 --rc geninfo_all_blocks=1 00:15:07.283 --rc geninfo_unexecuted_blocks=1 00:15:07.283 00:15:07.283 ' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:07.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.283 --rc genhtml_branch_coverage=1 00:15:07.283 --rc genhtml_function_coverage=1 00:15:07.283 --rc genhtml_legend=1 00:15:07.283 --rc geninfo_all_blocks=1 00:15:07.283 --rc geninfo_unexecuted_blocks=1 00:15:07.283 00:15:07.283 ' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:07.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.283 --rc genhtml_branch_coverage=1 00:15:07.283 --rc genhtml_function_coverage=1 00:15:07.283 --rc genhtml_legend=1 00:15:07.283 --rc geninfo_all_blocks=1 00:15:07.283 --rc geninfo_unexecuted_blocks=1 00:15:07.283 00:15:07.283 ' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:07.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:15:07.283 10:25:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:10.573 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:10.573 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:10.573 Found net devices under 0000:84:00.0: cvl_0_0 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:10.573 Found net devices under 0000:84:00.1: cvl_0_1 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:15:10.573 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:10.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:15:10.574 00:15:10.574 --- 10.0.0.2 ping statistics --- 00:15:10.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.574 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:10.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:15:10.574 00:15:10.574 --- 10.0.0.1 ping statistics --- 00:15:10.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.574 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2016168 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2016168 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2016168 ']' 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.574 10:25:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 [2024-12-09 10:25:55.071448] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:15:10.574 [2024-12-09 10:25:55.071628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.831 [2024-12-09 10:25:55.255440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.831 [2024-12-09 10:25:55.375066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.831 [2024-12-09 10:25:55.375167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.831 [2024-12-09 10:25:55.375205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.831 [2024-12-09 10:25:55.375245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.831 [2024-12-09 10:25:55.375271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.831 [2024-12-09 10:25:55.376608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 [2024-12-09 10:25:55.678751] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 [2024-12-09 10:25:55.695225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.089 malloc0 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.089 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:11.348 { 00:15:11.348 "params": { 00:15:11.348 "name": "Nvme$subsystem", 00:15:11.348 "trtype": "$TEST_TRANSPORT", 00:15:11.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.348 "adrfam": "ipv4", 00:15:11.348 "trsvcid": "$NVMF_PORT", 00:15:11.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:11.348 "hdgst": ${hdgst:-false}, 00:15:11.348 "ddgst": ${ddgst:-false} 00:15:11.348 }, 00:15:11.348 "method": "bdev_nvme_attach_controller" 00:15:11.348 } 00:15:11.348 EOF 00:15:11.348 )") 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
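The rpc_cmd calls traced above are the complete target-side setup for this test: a TCP transport created with zero-copy enabled (--zcopy) and in-capsule data disabled (-c 0), a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1; the nvmf_tgt process itself was launched a few lines earlier under ip netns exec cvl_0_0_ns_spdk with core mask 0x2. A minimal standalone sketch of the same sequence, assuming an SPDK checkout with the target already running and answering on the default /var/tmp/spdk.sock RPC socket:

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The --zcopy flag on nvmf_create_transport is what puts the zero-copy receive path under test for everything that follows.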
00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:15:11.348 10:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:11.348 "params": { 00:15:11.348 "name": "Nvme1", 00:15:11.348 "trtype": "tcp", 00:15:11.348 "traddr": "10.0.0.2", 00:15:11.348 "adrfam": "ipv4", 00:15:11.348 "trsvcid": "4420", 00:15:11.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.348 "hdgst": false, 00:15:11.348 "ddgst": false 00:15:11.348 }, 00:15:11.348 "method": "bdev_nvme_attach_controller" 00:15:11.348 }' 00:15:11.348 [2024-12-09 10:25:55.843465] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:15:11.348 [2024-12-09 10:25:55.843637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016315 ] 00:15:11.606 [2024-12-09 10:25:56.012169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.606 [2024-12-09 10:25:56.131999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.863 Running I/O for 10 seconds... 00:15:13.806 2359.00 IOPS, 18.43 MiB/s [2024-12-09T09:25:59.839Z] 2486.50 IOPS, 19.43 MiB/s [2024-12-09T09:26:00.777Z] 2493.67 IOPS, 19.48 MiB/s [2024-12-09T09:26:01.716Z] 2756.25 IOPS, 21.53 MiB/s [2024-12-09T09:26:02.652Z] 2895.20 IOPS, 22.62 MiB/s [2024-12-09T09:26:03.592Z] 2824.33 IOPS, 22.07 MiB/s [2024-12-09T09:26:04.533Z] 2778.71 IOPS, 21.71 MiB/s [2024-12-09T09:26:05.911Z] 2875.62 IOPS, 22.47 MiB/s [2024-12-09T09:26:06.540Z] 2861.22 IOPS, 22.35 MiB/s [2024-12-09T09:26:06.799Z] 2893.80 IOPS, 22.61 MiB/s 00:15:22.145 Latency(us) 00:15:22.145 [2024-12-09T09:26:06.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.145 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:22.145 Verification LBA range: start 0x0 length 0x1000 00:15:22.145 Nvme1n1 : 10.08 2883.08 22.52 0.00 0.00 44061.34 5679.79 70681.79 00:15:22.145 [2024-12-09T09:26:06.799Z] =================================================================================================================== 00:15:22.145 [2024-12-09T09:26:06.799Z] Total : 2883.08 22.52 0.00 0.00 44061.34 5679.79 70681.79 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2017516 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:22.405 { 00:15:22.405 "params": { 00:15:22.405 "name": 
"Nvme$subsystem", 00:15:22.405 "trtype": "$TEST_TRANSPORT", 00:15:22.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:22.405 "adrfam": "ipv4", 00:15:22.405 "trsvcid": "$NVMF_PORT", 00:15:22.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:22.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:22.405 "hdgst": ${hdgst:-false}, 00:15:22.405 "ddgst": ${ddgst:-false} 00:15:22.405 }, 00:15:22.405 "method": "bdev_nvme_attach_controller" 00:15:22.405 } 00:15:22.405 EOF 00:15:22.405 )") 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:15:22.405 [2024-12-09 10:26:06.868678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.868733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:15:22.405 10:26:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:22.405 "params": { 00:15:22.405 "name": "Nvme1", 00:15:22.405 "trtype": "tcp", 00:15:22.405 "traddr": "10.0.0.2", 00:15:22.405 "adrfam": "ipv4", 00:15:22.405 "trsvcid": "4420", 00:15:22.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:22.405 "hdgst": false, 00:15:22.405 "ddgst": false 00:15:22.405 }, 00:15:22.405 "method": "bdev_nvme_attach_controller" 00:15:22.405 }' 00:15:22.405 [2024-12-09 10:26:06.876630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.876658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.884647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.884672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.892668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.892693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.900690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.900715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.908710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.908742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.913600] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:15:22.405 [2024-12-09 10:26:06.913695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017516 ] 00:15:22.405 [2024-12-09 10:26:06.916739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.916764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.924760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.924784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.932780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.932804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.940803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.940828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.948817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.948841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.956839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.956863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.964862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.964886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.972883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.972908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.980905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.980929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.988926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.988950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:06.996947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:06.996971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:07.002768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.405 [2024-12-09 10:26:07.004969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:07.004993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:07.013034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:07.013071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:22.405 [2024-12-09 10:26:07.021048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:07.021085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:07.029035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:07.029060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:07.037058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:07.037083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:07.045078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:07.045102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.405 [2024-12-09 10:26:07.053098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.405 [2024-12-09 10:26:07.053133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.061130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.061158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.069147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.069174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.070842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.672 [2024-12-09 10:26:07.077170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.077195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.089333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.089394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.101377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.101442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.113408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.113471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.125453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.125519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.137493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.137562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.149518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.149586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 
10:26:07.161551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.161616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.173582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.173646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.185603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.185657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.197653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.197716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.209691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.209789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.221744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.221816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.233774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.233835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.245798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.245852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.257838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.257893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.270057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.270127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.282115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.282179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.290211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.290272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.302328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.302392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.672 [2024-12-09 10:26:07.314375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.672 [2024-12-09 10:26:07.314440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.322450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.322538] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.334453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.334520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.346462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.346525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.358493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.358555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.370536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.370593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.382571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.382627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.394628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.394693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.406650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.406707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.418691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.418764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.430744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.430799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.442788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.442845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.454829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.454899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.466870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.466936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 Running I/O for 5 seconds... 
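The paired errors repeating from here on -- spdk_nvmf_subsystem_add_ns_ext rejecting the request, then nvmf_rpc_ns_paused reporting the failed RPC -- are expected output, not a malfunction: while the 5-second randrw bdevperf job (perfpid 2017516) drives zero-copy I/O, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached. Each attempt pauses and resumes the subsystem around the failing add (hence the nvmf_rpc_ns_paused callback in every pair), so the pause/resume machinery is exercised under live traffic. A minimal sketch of such a loop, assuming it simply tolerates the expected error; the iteration count and pacing in the real zcopy.sh may differ:

  # Keep re-adding an already-attached namespace while bdevperf runs;
  # every call should fail with "Requested NSID 1 already in use".
  while kill -0 "$perfpid" 2> /dev/null; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done

Here $perfpid stands for the bdevperf PID the script captured at launch (perfpid=2017516 in this run).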
00:15:22.958 [2024-12-09 10:26:07.492952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.492991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.517578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.517649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.542864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.542932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.566950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.566982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.958 [2024-12-09 10:26:07.590746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.958 [2024-12-09 10:26:07.590814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.616348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.616428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.642659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.642745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.666607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.666677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.690409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.690484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.714271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.714340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.739122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.739191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.763577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.763647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.787096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.787127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.811272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.811341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.834942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 
[2024-12-09 10:26:07.835011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.247 [2024-12-09 10:26:07.859204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.247 [2024-12-09 10:26:07.859272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:07.885771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:07.885803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:07.910943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:07.911013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:07.935984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:07.936015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:07.961139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:07.961228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:07.985652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:07.985739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:08.009802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:08.009871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:08.034282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:08.034351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:08.058645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:08.058715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:08.082483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:08.082515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:08.100629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:08.100698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:08.123392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:08.123463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.508 [2024-12-09 10:26:08.148996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.508 [2024-12-09 10:26:08.149064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.175385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.175454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.195477] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.195546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.221012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.221080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.247099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.247168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.273303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.273374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.299873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.299943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.325943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.326012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.351235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.351302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.377324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.377394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.767 [2024-12-09 10:26:08.403927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.767 [2024-12-09 10:26:08.403997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.025 [2024-12-09 10:26:08.424924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.025 [2024-12-09 10:26:08.424968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.025 [2024-12-09 10:26:08.446396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.025 [2024-12-09 10:26:08.446476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.025 [2024-12-09 10:26:08.470600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.025 [2024-12-09 10:26:08.470669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.025 5146.00 IOPS, 40.20 MiB/s [2024-12-09T09:26:08.679Z] [2024-12-09 10:26:08.496803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.025 [2024-12-09 10:26:08.496877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.025 [2024-12-09 10:26:08.520949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.025 [2024-12-09 10:26:08.520991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.025 [2024-12-09 10:26:08.544837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
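Both bdevperf invocations in this trace configure the initiator the same way: the bdevperf example binary reads its bdev configuration from an anonymous file descriptor (--json /dev/fd/62, then /dev/fd/63), i.e. bash process substitution fed by gen_nvmf_target_json, the test/nvmf/common.sh helper whose bdev_nvme_attach_controller JSON was printed earlier in the trace. A sketch of the two runs outside the CI wrapper, assuming an SPDK build tree and the environment variables that gen_nvmf_target_json expects to be set:

  # First pass (zcopy.sh@33): 10 s verify workload, queue depth 128, 8 KiB I/O
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
  # Second pass (zcopy.sh@37): 5 s 50/50 random read/write while the add_ns loop runs
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192

The verify pass completed above at 2883.08 IOPS (22.52 MiB/s) over 10.08 s against the malloc-backed namespace.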
00:15:24.025 [2024-12-09 10:26:08.544905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:24.025 [2024-12-09 10:26:08.569817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:24.025 [2024-12-09 10:26:08.569848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
00:15:25.059 5050.00 IOPS, 39.45 MiB/s [2024-12-09T09:26:09.713Z] [2024-12-09 10:26:09.495455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:25.059 [2024-12-09 10:26:09.495523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
00:15:26.095 5089.33 IOPS, 39.76 MiB/s [2024-12-09T09:26:10.749Z] [2024-12-09 10:26:10.497965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:26.095 [2024-12-09 10:26:10.498033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
00:15:26.871 5140.50 IOPS, 40.16 MiB/s [2024-12-09T09:26:11.525Z] [2024-12-09 10:26:11.503202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:27.130 [2024-12-09 10:26:11.503270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
00:15:27.907 5120.20 IOPS, 40.00 MiB/s [2024-12-09T09:26:12.561Z] [2024-12-09 10:26:12.503560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:27.907 [2024-12-09 10:26:12.503629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:27.907
00:15:27.907 Latency(us)
00:15:27.907 [2024-12-09T09:26:12.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:27.907 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:27.907 Nvme1n1 : 5.02 5121.04 40.01 0.00 0.00 24930.25 6844.87 41166.32
00:15:27.907 [2024-12-09T09:26:12.561Z] ===================================================================================================================
00:15:27.907 [2024-12-09T09:26:12.561Z] Total : 5121.04 40.01 0.00 0.00 24930.25 6844.87 41166.32
00:15:27.907 [2024-12-09 10:26:12.512749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:27.907 [2024-12-09 10:26:12.512827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
00:15:28.425 [2024-12-09 10:26:12.829653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:28.426 [2024-12-09 10:26:12.829711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:28.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2017516) - No such process
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2017516
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:28.426 delay0
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.426 10:26:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-12-09 10:26:13.038528] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:34.982 Initializing NVMe Controllers
00:15:34.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:34.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:34.982 Initialization complete. Launching workers.
00:15:34.982 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 54
00:15:34.982 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 341, failed to submit 33
00:15:34.982 success 152, unsuccessful 189, failed 0
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:34.982 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2016168 ']'
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2016168
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2016168 ']'
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2016168
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2016168
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2016168'
00:15:34.982 killing process with pid 2016168
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2016168
00:15:34.982 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2016168
00:15:35.240 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:35.240 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:35.240 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:35.240 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:15:35.240 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:15:35.240 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:35.240 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:15:35.241 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:35.241 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:35.241 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:35.241 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:35.241 10:26:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:37.776
00:15:37.776 real 0m30.409s
00:15:37.776 user 0m43.007s
00:15:37.776 sys 0m10.102s
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:37.776 ************************************
00:15:37.776 END TEST nvmf_zcopy
00:15:37.776 ************************************
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:15:37.776 ************************************
00:15:37.776 START TEST nvmf_nmic
00:15:37.776 ************************************
00:15:37.776 10:26:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:15:37.776 * Looking for test storage...
00:15:37.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:37.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.776 --rc genhtml_branch_coverage=1
00:15:37.776 --rc genhtml_function_coverage=1
00:15:37.776 --rc genhtml_legend=1
00:15:37.776 --rc geninfo_all_blocks=1
00:15:37.776 --rc geninfo_unexecuted_blocks=1
00:15:37.776
00:15:37.776 '
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:37.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.776 --rc genhtml_branch_coverage=1
00:15:37.776 --rc genhtml_function_coverage=1
00:15:37.776 --rc genhtml_legend=1
00:15:37.776 --rc geninfo_all_blocks=1
00:15:37.776 --rc geninfo_unexecuted_blocks=1
00:15:37.776
00:15:37.776 '
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:15:37.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.776 --rc genhtml_branch_coverage=1
00:15:37.776 --rc genhtml_function_coverage=1
00:15:37.776 --rc genhtml_legend=1
00:15:37.776 --rc geninfo_all_blocks=1
00:15:37.776 --rc geninfo_unexecuted_blocks=1
00:15:37.776
00:15:37.776 '
00:15:37.776 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:15:37.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.776 --rc genhtml_branch_coverage=1
00:15:37.776 --rc genhtml_function_coverage=1
00:15:37.776 --rc genhtml_legend=1
00:15:37.776 --rc geninfo_all_blocks=1
00:15:37.776 --rc geninfo_unexecuted_blocks=1
00:15:37.777
00:15:37.777 '
00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:37.777 
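The stray "test/nvmf/common.sh: line 33: [: : integer expression expected" above is noise rather than a test failure: build_nvmf_app_args evaluates '[' '' -eq 1 ']' with an empty expansion, and the [ builtin requires integers on both sides of -eq. A two-line repro plus the usual guard (flag is a hypothetical stand-in; the trace does not show which variable was empty):

    flag=''                                  # stand-in for the empty variable at common.sh line 33
    [ "$flag" -eq 1 ] && echo enabled        # stderr: [: : integer expression expected; branch skipped
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting keeps the numeric test well-formed

The script survives because the malformed test simply evaluates false (exit status 2) and the guarded branch is skipped.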
10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:15:37.777 10:26:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:41.070 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:41.070 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:41.070 10:26:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:41.070 Found net devices under 0000:84:00.0: cvl_0_0 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.070 10:26:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:41.070 Found net devices under 0000:84:00.1: cvl_0_1 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:41.070 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:41.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:15:41.071 00:15:41.071 --- 10.0.0.2 ping statistics --- 00:15:41.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.071 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:41.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:41.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:15:41.071 00:15:41.071 --- 10.0.0.1 ping statistics --- 00:15:41.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.071 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2021048 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2021048 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2021048 ']' 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.071 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.071 [2024-12-09 10:26:25.307227] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
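The nvmf_tcp_init block traced above is worth reading as one unit: the two ports of the e810 NIC (0000:84:00.0/.1) were resolved to net devices through /sys/bus/pci/devices/$pci/net, then split across network namespaces so a single host can act as both initiator and target over real hardware. Condensed from the trace, same names and addresses:

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings confirm the path in both directions, and NVMF_APP is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt launched next listens on 10.0.0.2 inside the namespace.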
00:15:41.071 [2024-12-09 10:26:25.307409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.071 [2024-12-09 10:26:25.486352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.071 [2024-12-09 10:26:25.612090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.071 [2024-12-09 10:26:25.612193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.071 [2024-12-09 10:26:25.612230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.071 [2024-12-09 10:26:25.612268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.071 [2024-12-09 10:26:25.612280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.071 [2024-12-09 10:26:25.615379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.071 [2024-12-09 10:26:25.615537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.071 [2024-12-09 10:26:25.615540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.071 [2024-12-09 10:26:25.615442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.328 [2024-12-09 10:26:25.783340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.328 Malloc0 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.328 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.329 [2024-12-09 10:26:25.844650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:41.329 test case1: single bdev can't be used in multiple subsystems 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.329 [2024-12-09 10:26:25.868416] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:41.329 [2024-12-09 10:26:25.868450] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:41.329 [2024-12-09 10:26:25.868467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.329 request: 00:15:41.329 { 00:15:41.329 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:41.329 "namespace": { 00:15:41.329 "bdev_name": "Malloc0", 00:15:41.329 "no_auto_visible": false, 
00:15:41.329 "hide_metadata": false 00:15:41.329 }, 00:15:41.329 "method": "nvmf_subsystem_add_ns", 00:15:41.329 "req_id": 1 00:15:41.329 } 00:15:41.329 Got JSON-RPC error response 00:15:41.329 response: 00:15:41.329 { 00:15:41.329 "code": -32602, 00:15:41.329 "message": "Invalid parameters" 00:15:41.329 } 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:41.329 Adding namespace failed - expected result. 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:41.329 test case2: host connect to nvmf target in multiple paths 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:41.329 [2024-12-09 10:26:25.876542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.329 10:26:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:41.895 10:26:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:42.462 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:42.462 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:15:42.462 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.462 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:42.462 10:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:15:45.000 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:45.000 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:45.000 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.000 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:45.000 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.000 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:15:45.000 10:26:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:45.000 [global] 00:15:45.000 thread=1 00:15:45.000 invalidate=1 00:15:45.000 rw=write 00:15:45.000 time_based=1 00:15:45.000 runtime=1 00:15:45.000 ioengine=libaio 00:15:45.000 direct=1 00:15:45.000 bs=4096 00:15:45.000 iodepth=1 00:15:45.000 norandommap=0 00:15:45.000 numjobs=1 00:15:45.000 00:15:45.000 verify_dump=1 00:15:45.000 verify_backlog=512 00:15:45.000 verify_state_save=0 00:15:45.000 do_verify=1 00:15:45.000 verify=crc32c-intel 00:15:45.000 [job0] 00:15:45.000 filename=/dev/nvme0n1 00:15:45.000 Could not set queue depth (nvme0n1) 00:15:45.000 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.000 fio-3.35 00:15:45.000 Starting 1 thread 00:15:45.934 00:15:45.934 job0: (groupid=0, jobs=1): err= 0: pid=2021565: Mon Dec 9 10:26:30 2024 00:15:45.934 read: IOPS=2168, BW=8675KiB/s (8884kB/s)(8684KiB/1001msec) 00:15:45.934 slat (nsec): min=5642, max=38564, avg=11339.51, stdev=4605.47 00:15:45.934 clat (usec): min=181, max=1385, avg=227.75, stdev=38.77 00:15:45.934 lat (usec): min=188, max=1404, avg=239.09, stdev=39.26 00:15:45.934 clat percentiles (usec): 00:15:45.934 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:15:45.934 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:15:45.934 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 289], 00:15:45.934 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 441], 99.95th=[ 553], 00:15:45.934 | 99.99th=[ 1385] 00:15:45.934 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:45.934 slat (usec): min=7, max=33575, avg=27.25, stdev=663.34 00:15:45.934 clat (usec): min=126, max=354, avg=153.93, stdev=17.00 00:15:45.934 lat (usec): min=135, max=33882, avg=181.18, stdev=666.61 00:15:45.934 clat percentiles (usec): 00:15:45.934 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:15:45.934 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:15:45.934 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:15:45.934 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 318], 99.95th=[ 351], 00:15:45.934 | 99.99th=[ 355] 00:15:45.934 bw ( KiB/s): min=10664, max=10664, per=100.00%, avg=10664.00, stdev= 0.00, samples=1 00:15:45.934 iops : min= 2666, max= 2666, avg=2666.00, stdev= 0.00, samples=1 00:15:45.934 lat (usec) : 250=92.79%, 500=7.17%, 750=0.02% 00:15:45.934 lat (msec) : 2=0.02% 00:15:45.934 cpu : usr=2.70%, sys=7.00%, ctx=4735, majf=0, minf=1 00:15:45.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:45.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.934 issued rwts: total=2171,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:45.934 00:15:45.934 Run status group 0 (all jobs): 00:15:45.934 READ: bw=8675KiB/s (8884kB/s), 8675KiB/s-8675KiB/s (8884kB/s-8884kB/s), io=8684KiB (8892kB), run=1001-1001msec 00:15:45.934 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:15:45.934 00:15:45.934 Disk stats (read/write): 00:15:45.934 nvme0n1: ios=2100/2118, merge=0/0, ticks=1266/322, in_queue=1588, util=98.60% 00:15:45.934 10:26:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:46.194 rmmod nvme_tcp 00:15:46.194 rmmod nvme_fabrics 00:15:46.194 rmmod nvme_keyring 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2021048 ']' 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2021048 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2021048 ']' 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2021048 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.194 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2021048 00:15:46.195 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.195 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.195 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2021048' 00:15:46.195 killing process with pid 2021048 00:15:46.195 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2021048 00:15:46.195 10:26:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2021048 00:15:46.454 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:46.454 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:46.454 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:46.713 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:15:46.713 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:15:46.713 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:46.714 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:15:46.714 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:46.714 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:46.714 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.714 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.714 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:48.619 00:15:48.619 real 0m11.200s 00:15:48.619 user 0m23.150s 00:15:48.619 sys 0m3.414s 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.619 ************************************ 00:15:48.619 END TEST nvmf_nmic 00:15:48.619 ************************************ 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:48.619 ************************************ 00:15:48.619 START TEST nvmf_fio_target 00:15:48.619 ************************************ 00:15:48.619 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:48.876 * Looking for test storage... 
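Before the fio_target test repeats the same bring-up below, note the teardown order the nmic test just completed; condensed from the trace (the body of _remove_spdk_ns is not shown, so the ip netns delete line is an assumed equivalent):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drop both host paths first
    modprobe -v -r nvme-tcp                                # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess 2021048
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

The SPDK_NVMF comment attached when the rule was inserted is what makes the grep -v filter safe: only rules tagged by the harness are dropped, and everything else in the ruleset is restored untouched before the next test starts.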
00:15:48.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.876 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.876 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.876 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:49.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.136 --rc genhtml_branch_coverage=1 00:15:49.136 --rc genhtml_function_coverage=1 00:15:49.136 --rc genhtml_legend=1 00:15:49.136 --rc geninfo_all_blocks=1 00:15:49.136 --rc geninfo_unexecuted_blocks=1 00:15:49.136 00:15:49.136 ' 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:49.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.136 --rc genhtml_branch_coverage=1 00:15:49.136 --rc genhtml_function_coverage=1 00:15:49.136 --rc genhtml_legend=1 00:15:49.136 --rc geninfo_all_blocks=1 00:15:49.136 --rc geninfo_unexecuted_blocks=1 00:15:49.136 00:15:49.136 ' 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:49.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.136 --rc genhtml_branch_coverage=1 00:15:49.136 --rc genhtml_function_coverage=1 00:15:49.136 --rc genhtml_legend=1 00:15:49.136 --rc geninfo_all_blocks=1 00:15:49.136 --rc geninfo_unexecuted_blocks=1 00:15:49.136 00:15:49.136 ' 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:49.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.136 --rc genhtml_branch_coverage=1 00:15:49.136 --rc genhtml_function_coverage=1 00:15:49.136 --rc genhtml_legend=1 00:15:49.136 --rc geninfo_all_blocks=1 00:15:49.136 --rc geninfo_unexecuted_blocks=1 00:15:49.136 00:15:49.136 ' 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.136 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.137 10:26:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:49.137 10:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.424 10:26:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:52.424 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:52.424 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.424 10:26:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:52.424 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:52.425 Found net devices under 0000:84:00.0: cvl_0_0 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:52.425 Found net devices under 0000:84:00.1: cvl_0_1 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.425 10:26:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:52.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:15:52.425 00:15:52.425 --- 10.0.0.2 ping statistics --- 00:15:52.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.425 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
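# Sketch (editorial note, not log output): the network layout that nvmftestinit
# builds in the trace above, condensed into plain iproute2/iptables calls.
# The interface names (cvl_0_0/cvl_0_1), the namespace name and the
# 10.0.0.0/24 addresses are taken verbatim from the log; the ordering is a
# best-effort reconstruction. Port 0 of the dual-port E810 moves into a
# network namespace and acts as the target side; port 1 stays in the root
# namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic reach the listener port before the tests start.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Both directions are then verified with single-packet pings, as shown in the
# surrounding trace.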
00:15:52.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:15:52.425 00:15:52.425 --- 10.0.0.1 ping statistics --- 00:15:52.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.425 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2023920 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2023920 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2023920 ']' 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.425 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.425 [2024-12-09 10:26:37.035983] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
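# Sketch (editorial note, not log output): the target bring-up that
# nvmfappstart and target/fio.sh perform in the trace below, reduced to its
# RPC sequence. $SPDK is a hypothetical shorthand for the checkout root; the
# readiness loop is a reconstruction of waitforlisten; every RPC and its
# arguments are the ones visible in the log. rpc.py talks to the default
# /var/tmp/spdk.sock, which is filesystem-visible across namespaces, so only
# the target binary itself needs to run inside cvl_0_0_ns_spdk.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512      # creates Malloc0..Malloc6
done
$SPDK/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$SPDK/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
done
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The initiator connects from the root namespace; the four namespaces then
# surface as /dev/nvme0n1..nvme0n4 with serial SPDKISFASTANDAWESOME.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420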
00:15:52.425 [2024-12-09 10:26:37.036172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.686 [2024-12-09 10:26:37.224528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.945 [2024-12-09 10:26:37.346663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.945 [2024-12-09 10:26:37.346782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.945 [2024-12-09 10:26:37.346823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.945 [2024-12-09 10:26:37.346854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.945 [2024-12-09 10:26:37.346879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.945 [2024-12-09 10:26:37.350446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.945 [2024-12-09 10:26:37.350548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.945 [2024-12-09 10:26:37.350638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.945 [2024-12-09 10:26:37.350642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.945 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.945 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:15:52.945 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:52.945 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:52.945 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.945 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.945 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:53.204 [2024-12-09 10:26:37.823629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.204 10:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.770 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:53.770 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.028 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:54.028 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.595 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:54.595 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.230 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:55.230 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:55.505 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.097 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:56.097 10:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.354 10:26:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:56.354 10:26:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.919 10:26:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:56.919 10:26:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:57.177 10:26:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:57.757 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:57.757 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.322 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:58.322 10:26:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.890 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.148 [2024-12-09 10:26:43.765125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.148 10:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:59.715 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:59.973 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:00.548 10:26:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:00.548 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:16:00.548 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.548 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:16:00.548 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:16:00.548 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:16:03.080 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:03.080 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:03.080 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:03.080 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:16:03.080 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.080 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:16:03.080 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:03.080 [global] 00:16:03.080 thread=1 00:16:03.080 invalidate=1 00:16:03.080 rw=write 00:16:03.080 time_based=1 00:16:03.080 runtime=1 00:16:03.080 ioengine=libaio 00:16:03.080 direct=1 00:16:03.080 bs=4096 00:16:03.080 iodepth=1 00:16:03.080 norandommap=0 00:16:03.080 numjobs=1 00:16:03.080 00:16:03.080 verify_dump=1 00:16:03.080 verify_backlog=512 00:16:03.080 verify_state_save=0 00:16:03.080 do_verify=1 00:16:03.080 verify=crc32c-intel 00:16:03.080 [job0] 00:16:03.080 filename=/dev/nvme0n1 00:16:03.080 [job1] 00:16:03.080 filename=/dev/nvme0n2 00:16:03.080 [job2] 00:16:03.080 filename=/dev/nvme0n3 00:16:03.080 [job3] 00:16:03.080 filename=/dev/nvme0n4 00:16:03.080 Could not set queue depth (nvme0n1) 00:16:03.080 Could not set queue depth (nvme0n2) 00:16:03.080 Could not set queue depth (nvme0n3) 00:16:03.080 Could not set queue depth (nvme0n4) 00:16:03.080 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.080 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.080 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.080 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.080 fio-3.35 00:16:03.080 Starting 4 threads 00:16:04.457 00:16:04.457 job0: (groupid=0, jobs=1): err= 0: pid=2025264: Mon Dec 9 10:26:48 2024 00:16:04.457 read: IOPS=561, BW=2245KiB/s (2299kB/s)(2292KiB/1021msec) 00:16:04.457 slat (nsec): min=7064, max=28777, avg=8779.17, stdev=2775.21 00:16:04.457 clat (usec): min=165, max=41128, avg=1414.20, stdev=6920.22 00:16:04.457 lat (usec): min=173, max=41145, avg=1422.98, stdev=6921.81 00:16:04.457 clat percentiles (usec): 00:16:04.457 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 
00:16:04.457 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:16:04.457 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 251], 95.00th=[ 269], 00:16:04.457 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:04.457 | 99.99th=[41157] 00:16:04.457 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:16:04.457 slat (nsec): min=9514, max=39028, avg=10985.18, stdev=2405.36 00:16:04.457 clat (usec): min=128, max=321, avg=184.87, stdev=41.14 00:16:04.457 lat (usec): min=138, max=332, avg=195.85, stdev=41.80 00:16:04.457 clat percentiles (usec): 00:16:04.457 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:16:04.457 | 30.00th=[ 151], 40.00th=[ 161], 50.00th=[ 176], 60.00th=[ 200], 00:16:04.457 | 70.00th=[ 217], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 251], 00:16:04.457 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 322], 99.95th=[ 322], 00:16:04.457 | 99.99th=[ 322] 00:16:04.457 bw ( KiB/s): min= 8192, max= 8192, per=83.04%, avg=8192.00, stdev= 0.00, samples=1 00:16:04.457 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:04.457 lat (usec) : 250=92.99%, 500=5.95% 00:16:04.457 lat (msec) : 50=1.06% 00:16:04.457 cpu : usr=0.69%, sys=2.65%, ctx=1597, majf=0, minf=1 00:16:04.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.457 issued rwts: total=573,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.457 job1: (groupid=0, jobs=1): err= 0: pid=2025265: Mon Dec 9 10:26:48 2024 00:16:04.457 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:16:04.457 slat (nsec): min=9321, max=32146, avg=20485.13, stdev=7069.57 00:16:04.457 clat (usec): min=40433, max=41093, avg=40942.37, stdev=126.38 00:16:04.457 lat (usec): min=40442, max=41125, avg=40962.86, stdev=129.02 00:16:04.457 clat percentiles (usec): 00:16:04.457 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:04.457 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:04.457 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:04.457 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:04.457 | 99.99th=[41157] 00:16:04.457 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:16:04.457 slat (nsec): min=9429, max=32396, avg=11238.85, stdev=3430.22 00:16:04.457 clat (usec): min=145, max=288, avg=171.94, stdev=16.78 00:16:04.457 lat (usec): min=155, max=320, avg=183.18, stdev=17.53 00:16:04.457 clat percentiles (usec): 00:16:04.457 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:16:04.457 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:16:04.457 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 00:16:04.457 | 99.00th=[ 225], 99.50th=[ 239], 99.90th=[ 289], 99.95th=[ 289], 00:16:04.457 | 99.99th=[ 289] 00:16:04.457 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:16:04.457 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:04.457 lat (usec) : 250=95.51%, 500=0.19% 00:16:04.457 lat (msec) : 50=4.30% 00:16:04.457 cpu : usr=0.58%, sys=0.58%, ctx=535, majf=0, minf=1 00:16:04.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:16:04.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.457 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.457 job2: (groupid=0, jobs=1): err= 0: pid=2025266: Mon Dec 9 10:26:48 2024 00:16:04.457 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:16:04.457 slat (nsec): min=9080, max=34203, avg=19522.95, stdev=5094.62 00:16:04.457 clat (usec): min=40899, max=41169, avg=40985.31, stdev=54.06 00:16:04.457 lat (usec): min=40917, max=41178, avg=41004.84, stdev=51.45 00:16:04.457 clat percentiles (usec): 00:16:04.457 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:04.457 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:04.457 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:04.457 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:04.457 | 99.99th=[41157] 00:16:04.457 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:16:04.457 slat (nsec): min=8877, max=25803, avg=10706.29, stdev=2321.24 00:16:04.457 clat (usec): min=149, max=298, avg=180.94, stdev=18.36 00:16:04.457 lat (usec): min=160, max=323, avg=191.64, stdev=18.75 00:16:04.457 clat percentiles (usec): 00:16:04.457 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:16:04.457 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:16:04.457 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 215], 00:16:04.457 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 297], 99.95th=[ 297], 00:16:04.457 | 99.99th=[ 297] 00:16:04.457 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:16:04.457 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:04.457 lat (usec) : 250=95.51%, 500=0.37% 00:16:04.457 lat (msec) : 50=4.12% 00:16:04.457 cpu : usr=0.00%, sys=0.80%, ctx=537, majf=0, minf=1 00:16:04.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.458 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.458 job3: (groupid=0, jobs=1): err= 0: pid=2025267: Mon Dec 9 10:26:48 2024 00:16:04.458 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:16:04.458 slat (nsec): min=9824, max=43715, avg=19884.91, stdev=7143.69 00:16:04.458 clat (usec): min=40440, max=41902, avg=41014.65, stdev=259.90 00:16:04.458 lat (usec): min=40459, max=41925, avg=41034.53, stdev=259.67 00:16:04.458 clat percentiles (usec): 00:16:04.458 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:04.458 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:04.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:04.458 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:04.458 | 99.99th=[41681] 00:16:04.458 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:16:04.458 slat (nsec): min=10716, max=34881, avg=12566.28, stdev=3026.01 00:16:04.458 clat (usec): min=167, max=345, avg=220.09, stdev=24.89 00:16:04.458 lat (usec): min=178, max=358, 
avg=232.66, stdev=25.55 00:16:04.458 clat percentiles (usec): 00:16:04.458 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 200], 00:16:04.458 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:16:04.458 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 255], 00:16:04.458 | 99.00th=[ 273], 99.50th=[ 310], 99.90th=[ 347], 99.95th=[ 347], 00:16:04.458 | 99.99th=[ 347] 00:16:04.458 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:16:04.458 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:04.458 lat (usec) : 250=86.70%, 500=9.18% 00:16:04.458 lat (msec) : 50=4.12% 00:16:04.458 cpu : usr=0.20%, sys=0.98%, ctx=535, majf=0, minf=1 00:16:04.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:04.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.458 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:04.458 00:16:04.458 Run status group 0 (all jobs): 00:16:04.458 READ: bw=2466KiB/s (2525kB/s), 85.9KiB/s-2245KiB/s (88.0kB/s-2299kB/s), io=2560KiB (2621kB), run=1002-1038msec 00:16:04.458 WRITE: bw=9865KiB/s (10.1MB/s), 1973KiB/s-4012KiB/s (2020kB/s-4108kB/s), io=10.0MiB (10.5MB), run=1002-1038msec 00:16:04.458 00:16:04.458 Disk stats (read/write): 00:16:04.458 nvme0n1: ios=618/1024, merge=0/0, ticks=632/191, in_queue=823, util=85.37% 00:16:04.458 nvme0n2: ios=68/512, merge=0/0, ticks=803/92, in_queue=895, util=89.17% 00:16:04.458 nvme0n3: ios=78/512, merge=0/0, ticks=1549/94, in_queue=1643, util=91.95% 00:16:04.458 nvme0n4: ios=74/512, merge=0/0, ticks=1219/108, in_queue=1327, util=94.21% 00:16:04.458 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:04.458 [global] 00:16:04.458 thread=1 00:16:04.458 invalidate=1 00:16:04.458 rw=randwrite 00:16:04.458 time_based=1 00:16:04.458 runtime=1 00:16:04.458 ioengine=libaio 00:16:04.458 direct=1 00:16:04.458 bs=4096 00:16:04.458 iodepth=1 00:16:04.458 norandommap=0 00:16:04.458 numjobs=1 00:16:04.458 00:16:04.458 verify_dump=1 00:16:04.458 verify_backlog=512 00:16:04.458 verify_state_save=0 00:16:04.458 do_verify=1 00:16:04.458 verify=crc32c-intel 00:16:04.458 [job0] 00:16:04.458 filename=/dev/nvme0n1 00:16:04.458 [job1] 00:16:04.458 filename=/dev/nvme0n2 00:16:04.458 [job2] 00:16:04.458 filename=/dev/nvme0n3 00:16:04.458 [job3] 00:16:04.458 filename=/dev/nvme0n4 00:16:04.458 Could not set queue depth (nvme0n1) 00:16:04.458 Could not set queue depth (nvme0n2) 00:16:04.458 Could not set queue depth (nvme0n3) 00:16:04.458 Could not set queue depth (nvme0n4) 00:16:04.458 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.458 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.458 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.458 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.458 fio-3.35 00:16:04.458 Starting 4 threads 00:16:05.846 00:16:05.846 job0: (groupid=0, jobs=1): err= 0: pid=2025493: Mon Dec 9 10:26:50 2024 00:16:05.846 
read: IOPS=30, BW=121KiB/s (124kB/s)(124KiB/1024msec) 00:16:05.846 slat (nsec): min=7355, max=32834, avg=17697.03, stdev=6440.22 00:16:05.846 clat (usec): min=226, max=41450, avg=29123.66, stdev=18725.14 00:16:05.846 lat (usec): min=246, max=41460, avg=29141.35, stdev=18726.59 00:16:05.846 clat percentiles (usec): 00:16:05.846 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 277], 20.00th=[ 396], 00:16:05.846 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:16:05.846 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:05.846 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:05.846 | 99.99th=[41681] 00:16:05.846 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:16:05.846 slat (nsec): min=8280, max=40736, avg=12816.64, stdev=3795.80 00:16:05.846 clat (usec): min=137, max=465, avg=217.95, stdev=75.12 00:16:05.846 lat (usec): min=149, max=486, avg=230.76, stdev=75.11 00:16:05.846 clat percentiles (usec): 00:16:05.846 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:16:05.846 | 30.00th=[ 165], 40.00th=[ 178], 50.00th=[ 190], 60.00th=[ 217], 00:16:05.846 | 70.00th=[ 231], 80.00th=[ 260], 90.00th=[ 326], 95.00th=[ 400], 00:16:05.846 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 465], 99.95th=[ 465], 00:16:05.846 | 99.99th=[ 465] 00:16:05.847 bw ( KiB/s): min= 4096, max= 4096, per=34.57%, avg=4096.00, stdev= 0.00, samples=1 00:16:05.847 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:05.847 lat (usec) : 250=73.66%, 500=22.28% 00:16:05.847 lat (msec) : 50=4.05% 00:16:05.847 cpu : usr=0.20%, sys=0.78%, ctx=546, majf=0, minf=1 00:16:05.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.847 job1: (groupid=0, jobs=1): err= 0: pid=2025494: Mon Dec 9 10:26:50 2024 00:16:05.847 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:16:05.847 slat (nsec): min=8196, max=20023, avg=17183.50, stdev=2239.08 00:16:05.847 clat (usec): min=40659, max=41128, avg=40970.04, stdev=89.43 00:16:05.847 lat (usec): min=40667, max=41146, avg=40987.22, stdev=90.89 00:16:05.847 clat percentiles (usec): 00:16:05.847 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:05.847 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:05.847 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:05.847 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:05.847 | 99.99th=[41157] 00:16:05.847 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:16:05.847 slat (usec): min=7, max=27800, avg=64.66, stdev=1228.18 00:16:05.847 clat (usec): min=123, max=670, avg=195.42, stdev=48.02 00:16:05.847 lat (usec): min=132, max=28058, avg=260.08, stdev=1231.90 00:16:05.847 clat percentiles (usec): 00:16:05.847 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155], 00:16:05.847 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 194], 60.00th=[ 208], 00:16:05.847 | 70.00th=[ 217], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 273], 00:16:05.847 | 99.00th=[ 293], 99.50th=[ 420], 99.90th=[ 668], 99.95th=[ 668], 00:16:05.847 | 99.99th=[ 668] 00:16:05.847 bw ( 
KiB/s): min= 4096, max= 4096, per=34.57%, avg=4096.00, stdev= 0.00, samples=1 00:16:05.847 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:05.847 lat (usec) : 250=87.64%, 500=7.87%, 750=0.37% 00:16:05.847 lat (msec) : 50=4.12% 00:16:05.847 cpu : usr=0.10%, sys=0.68%, ctx=537, majf=0, minf=1 00:16:05.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.847 job2: (groupid=0, jobs=1): err= 0: pid=2025496: Mon Dec 9 10:26:50 2024 00:16:05.847 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:16:05.847 slat (nsec): min=11366, max=41497, avg=23301.62, stdev=8782.92 00:16:05.847 clat (usec): min=40884, max=41351, avg=40987.38, stdev=94.22 00:16:05.847 lat (usec): min=40916, max=41363, avg=41010.68, stdev=90.71 00:16:05.847 clat percentiles (usec): 00:16:05.847 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:05.847 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:05.847 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:05.847 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:05.847 | 99.99th=[41157] 00:16:05.847 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:05.847 slat (nsec): min=9745, max=36965, avg=14380.81, stdev=4692.56 00:16:05.847 clat (usec): min=153, max=474, avg=253.20, stdev=70.83 00:16:05.847 lat (usec): min=164, max=494, avg=267.58, stdev=71.64 00:16:05.847 clat percentiles (usec): 00:16:05.847 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 186], 00:16:05.847 | 30.00th=[ 202], 40.00th=[ 219], 50.00th=[ 237], 60.00th=[ 255], 00:16:05.847 | 70.00th=[ 293], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 392], 00:16:05.847 | 99.00th=[ 433], 99.50th=[ 453], 99.90th=[ 474], 99.95th=[ 474], 00:16:05.847 | 99.99th=[ 474] 00:16:05.847 bw ( KiB/s): min= 4096, max= 4096, per=34.57%, avg=4096.00, stdev= 0.00, samples=1 00:16:05.847 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:05.847 lat (usec) : 250=55.72%, 500=40.34% 00:16:05.847 lat (msec) : 50=3.94% 00:16:05.847 cpu : usr=0.50%, sys=1.00%, ctx=533, majf=0, minf=2 00:16:05.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.847 job3: (groupid=0, jobs=1): err= 0: pid=2025502: Mon Dec 9 10:26:50 2024 00:16:05.847 read: IOPS=1368, BW=5473KiB/s (5604kB/s)(5648KiB/1032msec) 00:16:05.847 slat (nsec): min=6605, max=48085, avg=14332.21, stdev=5389.96 00:16:05.847 clat (usec): min=172, max=41303, avg=488.18, stdev=2962.69 00:16:05.847 lat (usec): min=190, max=41315, avg=502.51, stdev=2963.10 00:16:05.847 clat percentiles (usec): 00:16:05.847 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:16:05.847 | 30.00th=[ 229], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 260], 00:16:05.847 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 347], 95.00th=[ 449], 
00:16:05.847 | 99.00th=[ 545], 99.50th=[29230], 99.90th=[41157], 99.95th=[41157], 00:16:05.847 | 99.99th=[41157] 00:16:05.847 write: IOPS=1488, BW=5953KiB/s (6096kB/s)(6144KiB/1032msec); 0 zone resets 00:16:05.847 slat (nsec): min=8154, max=48763, avg=13247.54, stdev=5729.67 00:16:05.847 clat (usec): min=129, max=722, avg=188.46, stdev=44.08 00:16:05.847 lat (usec): min=138, max=731, avg=201.71, stdev=45.59 00:16:05.847 clat percentiles (usec): 00:16:05.847 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 151], 00:16:05.847 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 180], 60.00th=[ 192], 00:16:05.847 | 70.00th=[ 206], 80.00th=[ 221], 90.00th=[ 241], 95.00th=[ 273], 00:16:05.847 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 494], 99.95th=[ 725], 00:16:05.847 | 99.99th=[ 725] 00:16:05.847 bw ( KiB/s): min= 4096, max= 8192, per=51.85%, avg=6144.00, stdev=2896.31, samples=2 00:16:05.847 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:16:05.847 lat (usec) : 250=72.39%, 500=26.80%, 750=0.51%, 1000=0.03% 00:16:05.847 lat (msec) : 50=0.27% 00:16:05.847 cpu : usr=1.26%, sys=4.85%, ctx=2949, majf=0, minf=1 00:16:05.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.847 issued rwts: total=1412,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.847 00:16:05.847 Run status group 0 (all jobs): 00:16:05.847 READ: bw=5732KiB/s (5869kB/s), 83.9KiB/s-5473KiB/s (85.9kB/s-5604kB/s), io=5944KiB (6087kB), run=1001-1037msec 00:16:05.847 WRITE: bw=11.6MiB/s (12.1MB/s), 1975KiB/s-5953KiB/s (2022kB/s-6096kB/s), io=12.0MiB (12.6MB), run=1001-1037msec 00:16:05.847 00:16:05.847 Disk stats (read/write): 00:16:05.847 nvme0n1: ios=44/512, merge=0/0, ticks=1521/111, in_queue=1632, util=85.07% 00:16:05.848 nvme0n2: ios=58/512, merge=0/0, ticks=804/98, in_queue=902, util=88.14% 00:16:05.848 nvme0n3: ios=73/512, merge=0/0, ticks=735/127, in_queue=862, util=94.23% 00:16:05.848 nvme0n4: ios=1085/1536, merge=0/0, ticks=1179/291, in_queue=1470, util=93.81% 00:16:05.848 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:05.848 [global] 00:16:05.848 thread=1 00:16:05.848 invalidate=1 00:16:05.848 rw=write 00:16:05.848 time_based=1 00:16:05.848 runtime=1 00:16:05.848 ioengine=libaio 00:16:05.848 direct=1 00:16:05.848 bs=4096 00:16:05.848 iodepth=128 00:16:05.848 norandommap=0 00:16:05.848 numjobs=1 00:16:05.848 00:16:05.848 verify_dump=1 00:16:05.848 verify_backlog=512 00:16:05.848 verify_state_save=0 00:16:05.848 do_verify=1 00:16:05.848 verify=crc32c-intel 00:16:05.848 [job0] 00:16:05.848 filename=/dev/nvme0n1 00:16:05.848 [job1] 00:16:05.848 filename=/dev/nvme0n2 00:16:05.848 [job2] 00:16:05.848 filename=/dev/nvme0n3 00:16:05.848 [job3] 00:16:05.848 filename=/dev/nvme0n4 00:16:05.848 Could not set queue depth (nvme0n1) 00:16:05.848 Could not set queue depth (nvme0n2) 00:16:05.848 Could not set queue depth (nvme0n3) 00:16:05.848 Could not set queue depth (nvme0n4) 00:16:06.114 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.114 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:16:06.114 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.114 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.114 fio-3.35 00:16:06.114 Starting 4 threads 00:16:07.488 00:16:07.488 job0: (groupid=0, jobs=1): err= 0: pid=2025730: Mon Dec 9 10:26:51 2024 00:16:07.488 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:16:07.488 slat (usec): min=3, max=10781, avg=118.06, stdev=719.41 00:16:07.488 clat (usec): min=3033, max=56118, avg=13263.22, stdev=5892.69 00:16:07.488 lat (usec): min=3049, max=56125, avg=13381.28, stdev=5943.18 00:16:07.488 clat percentiles (usec): 00:16:07.488 | 1.00th=[ 5800], 5.00th=[ 7898], 10.00th=[ 9372], 20.00th=[10028], 00:16:07.488 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11731], 60.00th=[11863], 00:16:07.488 | 70.00th=[12911], 80.00th=[15401], 90.00th=[20579], 95.00th=[26870], 00:16:07.488 | 99.00th=[33817], 99.50th=[38536], 99.90th=[55837], 99.95th=[56361], 00:16:07.488 | 99.99th=[56361] 00:16:07.488 write: IOPS=3986, BW=15.6MiB/s (16.3MB/s)(15.8MiB/1013msec); 0 zone resets 00:16:07.488 slat (usec): min=4, max=11997, avg=125.60, stdev=531.11 00:16:07.488 clat (usec): min=2573, max=68033, avg=19969.42, stdev=8846.40 00:16:07.488 lat (usec): min=2579, max=68038, avg=20095.02, stdev=8908.68 00:16:07.488 clat percentiles (usec): 00:16:07.488 | 1.00th=[ 3556], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[10421], 00:16:07.488 | 30.00th=[15664], 40.00th=[18482], 50.00th=[20841], 60.00th=[21627], 00:16:07.488 | 70.00th=[23200], 80.00th=[27132], 90.00th=[31065], 95.00th=[35390], 00:16:07.488 | 99.00th=[44303], 99.50th=[51119], 99.90th=[54264], 99.95th=[54264], 00:16:07.488 | 99.99th=[67634] 00:16:07.488 bw ( KiB/s): min=14336, max=16944, per=27.14%, avg=15640.00, stdev=1844.13, samples=2 00:16:07.488 iops : min= 3584, max= 4236, avg=3910.00, stdev=461.03, samples=2 00:16:07.488 lat (msec) : 4=0.84%, 10=17.11%, 20=48.79%, 50=32.83%, 100=0.43% 00:16:07.488 cpu : usr=2.96%, sys=5.34%, ctx=453, majf=0, minf=1 00:16:07.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:07.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.488 issued rwts: total=3584,4038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.488 job1: (groupid=0, jobs=1): err= 0: pid=2025734: Mon Dec 9 10:26:51 2024 00:16:07.488 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:16:07.488 slat (usec): min=4, max=11777, avg=172.94, stdev=918.51 00:16:07.488 clat (usec): min=11887, max=52679, avg=20896.15, stdev=9597.95 00:16:07.488 lat (usec): min=11896, max=52699, avg=21069.09, stdev=9663.44 00:16:07.488 clat percentiles (usec): 00:16:07.488 | 1.00th=[13042], 5.00th=[13698], 10.00th=[14222], 20.00th=[15664], 00:16:07.488 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16712], 60.00th=[17171], 00:16:07.488 | 70.00th=[18220], 80.00th=[25297], 90.00th=[39060], 95.00th=[46400], 00:16:07.488 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51643], 99.95th=[52167], 00:16:07.488 | 99.99th=[52691] 00:16:07.488 write: IOPS=2634, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1006msec); 0 zone resets 00:16:07.488 slat (usec): min=4, max=37075, avg=201.19, stdev=1282.42 00:16:07.488 clat (msec): min=5, max=106, avg=24.86, stdev=15.67 00:16:07.488 lat (msec): min=6, 
max=106, avg=25.06, stdev=15.80 00:16:07.488 clat percentiles (msec): 00:16:07.488 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:16:07.488 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 22], 00:16:07.488 | 70.00th=[ 26], 80.00th=[ 37], 90.00th=[ 54], 95.00th=[ 60], 00:16:07.488 | 99.00th=[ 70], 99.50th=[ 70], 99.90th=[ 70], 99.95th=[ 72], 00:16:07.488 | 99.99th=[ 107] 00:16:07.488 bw ( KiB/s): min= 8208, max=12288, per=17.79%, avg=10248.00, stdev=2885.00, samples=2 00:16:07.488 iops : min= 2052, max= 3072, avg=2562.00, stdev=721.25, samples=2 00:16:07.488 lat (msec) : 10=0.77%, 20=63.95%, 50=28.52%, 100=6.74%, 250=0.02% 00:16:07.488 cpu : usr=3.08%, sys=5.07%, ctx=313, majf=0, minf=1 00:16:07.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:07.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.488 issued rwts: total=2560,2650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.488 job2: (groupid=0, jobs=1): err= 0: pid=2025751: Mon Dec 9 10:26:51 2024 00:16:07.488 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:16:07.488 slat (usec): min=2, max=12000, avg=99.81, stdev=693.35 00:16:07.488 clat (usec): min=1314, max=60496, avg=13877.42, stdev=5743.12 00:16:07.488 lat (usec): min=1330, max=60514, avg=13977.23, stdev=5789.42 00:16:07.488 clat percentiles (usec): 00:16:07.488 | 1.00th=[ 1942], 5.00th=[ 6063], 10.00th=[ 8848], 20.00th=[11469], 00:16:07.488 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13173], 60.00th=[13960], 00:16:07.488 | 70.00th=[15270], 80.00th=[16057], 90.00th=[19006], 95.00th=[20579], 00:16:07.488 | 99.00th=[45351], 99.50th=[54264], 99.90th=[57934], 99.95th=[57934], 00:16:07.488 | 99.99th=[60556] 00:16:07.488 write: IOPS=4210, BW=16.4MiB/s (17.2MB/s)(16.7MiB/1014msec); 0 zone resets 00:16:07.488 slat (usec): min=4, max=11535, avg=121.14, stdev=655.43 00:16:07.488 clat (usec): min=1723, max=58619, avg=16741.09, stdev=12257.42 00:16:07.488 lat (usec): min=1743, max=58626, avg=16862.22, stdev=12340.82 00:16:07.488 clat percentiles (usec): 00:16:07.488 | 1.00th=[ 5997], 5.00th=[ 7504], 10.00th=[ 9765], 20.00th=[10945], 00:16:07.488 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:16:07.488 | 70.00th=[13435], 80.00th=[15401], 90.00th=[39060], 95.00th=[52167], 00:16:07.488 | 99.00th=[54264], 99.50th=[55837], 99.90th=[58459], 99.95th=[58459], 00:16:07.488 | 99.99th=[58459] 00:16:07.488 bw ( KiB/s): min=15544, max=17584, per=28.75%, avg=16564.00, stdev=1442.50, samples=2 00:16:07.488 iops : min= 3886, max= 4396, avg=4141.00, stdev=360.62, samples=2 00:16:07.488 lat (msec) : 2=0.71%, 4=1.10%, 10=10.68%, 20=77.43%, 50=6.20% 00:16:07.488 lat (msec) : 100=3.89% 00:16:07.488 cpu : usr=2.96%, sys=5.23%, ctx=425, majf=0, minf=1 00:16:07.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:07.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.488 issued rwts: total=4096,4269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.488 job3: (groupid=0, jobs=1): err= 0: pid=2025760: Mon Dec 9 10:26:51 2024 00:16:07.488 read: IOPS=3813, BW=14.9MiB/s (15.6MB/s)(15.6MiB/1045msec) 00:16:07.488 slat (usec): 
min=4, max=21709, avg=126.34, stdev=805.24 00:16:07.488 clat (usec): min=7904, max=60919, avg=17123.19, stdev=9816.46 00:16:07.488 lat (usec): min=8177, max=60954, avg=17249.53, stdev=9868.19 00:16:07.488 clat percentiles (usec): 00:16:07.488 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[11863], 20.00th=[12387], 00:16:07.488 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13173], 60.00th=[13829], 00:16:07.488 | 70.00th=[15270], 80.00th=[16909], 90.00th=[31327], 95.00th=[45876], 00:16:07.488 | 99.00th=[52691], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:16:07.488 | 99.99th=[61080] 00:16:07.488 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:16:07.488 slat (usec): min=6, max=24652, avg=112.11, stdev=683.80 00:16:07.488 clat (usec): min=6983, max=56282, avg=15612.50, stdev=6312.80 00:16:07.488 lat (usec): min=6993, max=56295, avg=15724.62, stdev=6355.14 00:16:07.488 clat percentiles (usec): 00:16:07.488 | 1.00th=[ 8586], 5.00th=[11469], 10.00th=[12387], 20.00th=[12649], 00:16:07.488 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:16:07.488 | 70.00th=[15139], 80.00th=[16909], 90.00th=[22938], 95.00th=[28705], 00:16:07.488 | 99.00th=[47973], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:16:07.488 | 99.99th=[56361] 00:16:07.488 bw ( KiB/s): min=16384, max=16384, per=28.43%, avg=16384.00, stdev= 0.00, samples=2 00:16:07.488 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:16:07.488 lat (msec) : 10=2.30%, 20=83.67%, 50=12.47%, 100=1.56% 00:16:07.488 cpu : usr=4.31%, sys=7.76%, ctx=460, majf=0, minf=1 00:16:07.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:07.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.488 issued rwts: total=3985,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.488 00:16:07.488 Run status group 0 (all jobs): 00:16:07.488 READ: bw=53.2MiB/s (55.8MB/s), 9.94MiB/s-15.8MiB/s (10.4MB/s-16.5MB/s), io=55.6MiB (58.3MB), run=1006-1045msec 00:16:07.489 WRITE: bw=56.3MiB/s (59.0MB/s), 10.3MiB/s-16.4MiB/s (10.8MB/s-17.2MB/s), io=58.8MiB (61.7MB), run=1006-1045msec 00:16:07.489 00:16:07.489 Disk stats (read/write): 00:16:07.489 nvme0n1: ios=3122/3295, merge=0/0, ticks=36292/55500, in_queue=91792, util=87.17% 00:16:07.489 nvme0n2: ios=1990/2048, merge=0/0, ticks=13552/18476, in_queue=32028, util=95.09% 00:16:07.489 nvme0n3: ios=3183/3584, merge=0/0, ticks=26893/42652, in_queue=69545, util=92.00% 00:16:07.489 nvme0n4: ios=3289/3584, merge=0/0, ticks=25343/25261, in_queue=50604, util=98.59% 00:16:07.489 10:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:07.489 [global] 00:16:07.489 thread=1 00:16:07.489 invalidate=1 00:16:07.489 rw=randwrite 00:16:07.489 time_based=1 00:16:07.489 runtime=1 00:16:07.489 ioengine=libaio 00:16:07.489 direct=1 00:16:07.489 bs=4096 00:16:07.489 iodepth=128 00:16:07.489 norandommap=0 00:16:07.489 numjobs=1 00:16:07.489 00:16:07.489 verify_dump=1 00:16:07.489 verify_backlog=512 00:16:07.489 verify_state_save=0 00:16:07.489 do_verify=1 00:16:07.489 verify=crc32c-intel 00:16:07.489 [job0] 00:16:07.489 filename=/dev/nvme0n1 00:16:07.489 [job1] 00:16:07.489 filename=/dev/nvme0n2 00:16:07.489 [job2] 00:16:07.489 
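# Sketch (editorial note, not log output): the complete jobfile that
# fio-wrapper generates for this 4k/QD128 randwrite pass, reassembled from
# the [global]/[jobN] dump printed around this point in the trace; only the
# temp-file path is invented. Running it standalone against the connected
# namespaces should reproduce the workload.
cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel
[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-randwrite.fio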
filename=/dev/nvme0n3 00:16:07.489 [job3] 00:16:07.489 filename=/dev/nvme0n4 00:16:07.489 Could not set queue depth (nvme0n1) 00:16:07.489 Could not set queue depth (nvme0n2) 00:16:07.489 Could not set queue depth (nvme0n3) 00:16:07.489 Could not set queue depth (nvme0n4) 00:16:07.489 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.489 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.489 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.489 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.489 fio-3.35 00:16:07.489 Starting 4 threads 00:16:08.876 00:16:08.876 job0: (groupid=0, jobs=1): err= 0: pid=2026078: Mon Dec 9 10:26:53 2024 00:16:08.876 read: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(15.9MiB/1049msec) 00:16:08.876 slat (usec): min=4, max=27404, avg=142.94, stdev=1035.81 00:16:08.876 clat (msec): min=6, max=112, avg=19.24, stdev=18.41 00:16:08.876 lat (msec): min=7, max=112, avg=19.38, stdev=18.54 00:16:08.876 clat percentiles (msec): 00:16:08.876 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:16:08.876 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:16:08.876 | 70.00th=[ 14], 80.00th=[ 21], 90.00th=[ 50], 95.00th=[ 64], 00:16:08.876 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 113], 99.95th=[ 113], 00:16:08.876 | 99.99th=[ 113] 00:16:08.876 write: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1049msec); 0 zone resets 00:16:08.876 slat (usec): min=4, max=9550, avg=94.26, stdev=541.31 00:16:08.876 clat (usec): min=4229, max=49434, avg=13319.93, stdev=5469.18 00:16:08.876 lat (usec): min=4239, max=49443, avg=13414.19, stdev=5517.54 00:16:08.876 clat percentiles (usec): 00:16:08.876 | 1.00th=[ 8160], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[10945], 00:16:08.876 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:16:08.876 | 70.00th=[12256], 80.00th=[13173], 90.00th=[20841], 95.00th=[23987], 00:16:08.876 | 99.00th=[40633], 99.50th=[41681], 99.90th=[41681], 99.95th=[43254], 00:16:08.876 | 99.99th=[49546] 00:16:08.876 bw ( KiB/s): min=10664, max=22104, per=27.20%, avg=16384.00, stdev=8089.30, samples=2 00:16:08.876 iops : min= 2666, max= 5526, avg=4096.00, stdev=2022.33, samples=2 00:16:08.876 lat (msec) : 10=12.91%, 20=70.58%, 50=12.20%, 100=4.17%, 250=0.12% 00:16:08.876 cpu : usr=4.10%, sys=6.87%, ctx=413, majf=0, minf=1 00:16:08.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:08.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.876 issued rwts: total=4073,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.876 job1: (groupid=0, jobs=1): err= 0: pid=2026080: Mon Dec 9 10:26:53 2024 00:16:08.876 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:16:08.876 slat (usec): min=3, max=10695, avg=102.66, stdev=611.20 00:16:08.876 clat (usec): min=5892, max=54771, avg=13747.99, stdev=6083.38 00:16:08.876 lat (usec): min=5908, max=56626, avg=13850.65, stdev=6127.41 00:16:08.876 clat percentiles (usec): 00:16:08.876 | 1.00th=[ 6652], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 8848], 00:16:08.876 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11600], 
60.00th=[13829], 00:16:08.876 | 70.00th=[16581], 80.00th=[18482], 90.00th=[21103], 95.00th=[23200], 00:16:08.876 | 99.00th=[33162], 99.50th=[49546], 99.90th=[54789], 99.95th=[54789], 00:16:08.876 | 99.99th=[54789] 00:16:08.876 write: IOPS=3945, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1006msec); 0 zone resets 00:16:08.876 slat (usec): min=3, max=9900, avg=151.95, stdev=798.81 00:16:08.876 clat (usec): min=3052, max=67797, avg=19563.81, stdev=12946.14 00:16:08.876 lat (usec): min=5642, max=67807, avg=19715.76, stdev=13041.70 00:16:08.876 clat percentiles (usec): 00:16:08.876 | 1.00th=[ 6521], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9372], 00:16:08.876 | 30.00th=[ 9765], 40.00th=[13566], 50.00th=[14877], 60.00th=[17957], 00:16:08.876 | 70.00th=[18482], 80.00th=[30802], 90.00th=[38011], 95.00th=[50070], 00:16:08.876 | 99.00th=[62129], 99.50th=[65799], 99.90th=[67634], 99.95th=[67634], 00:16:08.876 | 99.99th=[67634] 00:16:08.876 bw ( KiB/s): min=12536, max=18192, per=25.51%, avg=15364.00, stdev=3999.40, samples=2 00:16:08.876 iops : min= 3134, max= 4548, avg=3841.00, stdev=999.85, samples=2 00:16:08.876 lat (msec) : 4=0.01%, 10=33.15%, 20=45.90%, 50=18.05%, 100=2.89% 00:16:08.876 cpu : usr=2.99%, sys=6.57%, ctx=452, majf=0, minf=1 00:16:08.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:08.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.876 issued rwts: total=3584,3969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.876 job2: (groupid=0, jobs=1): err= 0: pid=2026086: Mon Dec 9 10:26:53 2024 00:16:08.876 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:16:08.876 slat (usec): min=3, max=13977, avg=134.31, stdev=935.75 00:16:08.876 clat (usec): min=4487, max=67642, avg=17012.39, stdev=9423.51 00:16:08.876 lat (usec): min=4497, max=67661, avg=17146.70, stdev=9507.90 00:16:08.876 clat percentiles (usec): 00:16:08.876 | 1.00th=[ 7046], 5.00th=[ 8848], 10.00th=[10683], 20.00th=[11469], 00:16:08.876 | 30.00th=[11863], 40.00th=[12125], 50.00th=[13566], 60.00th=[15401], 00:16:08.876 | 70.00th=[16909], 80.00th=[19268], 90.00th=[32900], 95.00th=[37487], 00:16:08.876 | 99.00th=[53740], 99.50th=[53740], 99.90th=[57934], 99.95th=[58983], 00:16:08.876 | 99.99th=[67634] 00:16:08.876 write: IOPS=4102, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1009msec); 0 zone resets 00:16:08.876 slat (usec): min=4, max=11043, avg=94.40, stdev=644.24 00:16:08.876 clat (usec): min=1600, max=51315, avg=14076.01, stdev=6799.57 00:16:08.876 lat (usec): min=1612, max=51332, avg=14170.42, stdev=6851.28 00:16:08.876 clat percentiles (usec): 00:16:08.876 | 1.00th=[ 3458], 5.00th=[ 5932], 10.00th=[ 7767], 20.00th=[10814], 00:16:08.876 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12387], 60.00th=[12649], 00:16:08.876 | 70.00th=[15008], 80.00th=[16712], 90.00th=[20841], 95.00th=[29754], 00:16:08.876 | 99.00th=[39060], 99.50th=[40633], 99.90th=[50070], 99.95th=[50070], 00:16:08.876 | 99.99th=[51119] 00:16:08.876 bw ( KiB/s): min=11912, max=20856, per=27.20%, avg=16384.00, stdev=6324.36, samples=2 00:16:08.876 iops : min= 2978, max= 5214, avg=4096.00, stdev=1581.09, samples=2 00:16:08.876 lat (msec) : 2=0.04%, 4=0.90%, 10=11.06%, 20=73.02%, 50=14.17% 00:16:08.876 lat (msec) : 100=0.81% 00:16:08.876 cpu : usr=4.46%, sys=6.85%, ctx=344, majf=0, minf=1 00:16:08.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.2% 00:16:08.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.876 issued rwts: total=4096,4139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.876 job3: (groupid=0, jobs=1): err= 0: pid=2026087: Mon Dec 9 10:26:53 2024 00:16:08.876 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:16:08.876 slat (usec): min=3, max=22469, avg=148.68, stdev=1069.16 00:16:08.876 clat (usec): min=6724, max=47719, avg=19424.61, stdev=7696.66 00:16:08.876 lat (usec): min=6728, max=47739, avg=19573.30, stdev=7786.10 00:16:08.876 clat percentiles (usec): 00:16:08.876 | 1.00th=[ 6980], 5.00th=[10814], 10.00th=[12256], 20.00th=[12649], 00:16:08.876 | 30.00th=[13698], 40.00th=[16909], 50.00th=[17957], 60.00th=[18482], 00:16:08.876 | 70.00th=[20579], 80.00th=[25560], 90.00th=[33162], 95.00th=[35914], 00:16:08.876 | 99.00th=[37487], 99.50th=[37487], 99.90th=[46400], 99.95th=[46924], 00:16:08.876 | 99.99th=[47973] 00:16:08.876 write: IOPS=3561, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:16:08.876 slat (usec): min=4, max=15182, avg=124.44, stdev=868.18 00:16:08.876 clat (usec): min=4589, max=40427, avg=16041.17, stdev=5803.32 00:16:08.876 lat (usec): min=6071, max=40444, avg=16165.61, stdev=5878.21 00:16:08.876 clat percentiles (usec): 00:16:08.876 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[10683], 20.00th=[12518], 00:16:08.876 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13698], 60.00th=[14222], 00:16:08.876 | 70.00th=[16319], 80.00th=[20841], 90.00th=[27395], 95.00th=[28181], 00:16:08.876 | 99.00th=[30802], 99.50th=[32637], 99.90th=[36963], 99.95th=[37487], 00:16:08.876 | 99.99th=[40633] 00:16:08.876 bw ( KiB/s): min=12288, max=16384, per=23.80%, avg=14336.00, stdev=2896.31, samples=2 00:16:08.876 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:16:08.876 lat (msec) : 10=3.69%, 20=69.91%, 50=26.40% 00:16:08.876 cpu : usr=2.88%, sys=3.48%, ctx=218, majf=0, minf=1 00:16:08.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:08.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.876 issued rwts: total=3584,3590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.876 00:16:08.876 Run status group 0 (all jobs): 00:16:08.876 READ: bw=57.1MiB/s (59.9MB/s), 13.9MiB/s-15.9MiB/s (14.6MB/s-16.6MB/s), io=59.9MiB (62.8MB), run=1006-1049msec 00:16:08.876 WRITE: bw=58.8MiB/s (61.7MB/s), 13.9MiB/s-16.0MiB/s (14.6MB/s-16.8MB/s), io=61.7MiB (64.7MB), run=1006-1049msec 00:16:08.876 00:16:08.876 Disk stats (read/write): 00:16:08.877 nvme0n1: ios=3760/4096, merge=0/0, ticks=17319/16187, in_queue=33506, util=94.49% 00:16:08.877 nvme0n2: ios=2595/2787, merge=0/0, ticks=19147/33003, in_queue=52150, util=97.56% 00:16:08.877 nvme0n3: ios=3607/4020, merge=0/0, ticks=42935/45194, in_queue=88129, util=89.32% 00:16:08.877 nvme0n4: ios=2883/3072, merge=0/0, ticks=21547/16831, in_queue=38378, util=97.25% 00:16:08.877 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:08.877 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2026223 00:16:08.877 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:08.877 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:08.877 [global] 00:16:08.877 thread=1 00:16:08.877 invalidate=1 00:16:08.877 rw=read 00:16:08.877 time_based=1 00:16:08.877 runtime=10 00:16:08.877 ioengine=libaio 00:16:08.877 direct=1 00:16:08.877 bs=4096 00:16:08.877 iodepth=1 00:16:08.877 norandommap=1 00:16:08.877 numjobs=1 00:16:08.877 00:16:08.877 [job0] 00:16:08.877 filename=/dev/nvme0n1 00:16:08.877 [job1] 00:16:08.877 filename=/dev/nvme0n2 00:16:08.877 [job2] 00:16:08.877 filename=/dev/nvme0n3 00:16:08.877 [job3] 00:16:08.877 filename=/dev/nvme0n4 00:16:08.877 Could not set queue depth (nvme0n1) 00:16:08.877 Could not set queue depth (nvme0n2) 00:16:08.877 Could not set queue depth (nvme0n3) 00:16:08.877 Could not set queue depth (nvme0n4) 00:16:09.135 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.135 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.135 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.135 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.135 fio-3.35 00:16:09.135 Starting 4 threads 00:16:12.418 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:12.418 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:12.418 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=323584, buflen=4096 00:16:12.418 fio: pid=2026314, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:12.418 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.418 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:12.674 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36687872, buflen=4096 00:16:12.674 fio: pid=2026313, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:12.931 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=59801600, buflen=4096 00:16:12.931 fio: pid=2026311, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:12.931 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:12.931 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:13.188 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.188 10:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:13.188 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=23023616, 
buflen=4096 00:16:13.188 fio: pid=2026312, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:13.446 00:16:13.446 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2026311: Mon Dec 9 10:26:57 2024 00:16:13.446 read: IOPS=3964, BW=15.5MiB/s (16.2MB/s)(57.0MiB/3683msec) 00:16:13.446 slat (usec): min=6, max=34854, avg=12.47, stdev=317.57 00:16:13.446 clat (usec): min=170, max=41144, avg=235.45, stdev=891.25 00:16:13.446 lat (usec): min=177, max=41164, avg=247.92, stdev=946.94 00:16:13.446 clat percentiles (usec): 00:16:13.446 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:16:13.446 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 215], 00:16:13.446 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 251], 95.00th=[ 281], 00:16:13.446 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 873], 99.95th=[ 1139], 00:16:13.446 | 99.99th=[41157] 00:16:13.446 bw ( KiB/s): min= 5768, max=18447, per=55.51%, avg=15899.29, stdev=4521.42, samples=7 00:16:13.446 iops : min= 1442, max= 4611, avg=3974.71, stdev=1130.28, samples=7 00:16:13.446 lat (usec) : 250=89.97%, 500=9.88%, 750=0.02%, 1000=0.03% 00:16:13.446 lat (msec) : 2=0.03%, 50=0.05% 00:16:13.446 cpu : usr=1.71%, sys=5.59%, ctx=14606, majf=0, minf=1 00:16:13.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.446 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.446 issued rwts: total=14601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.446 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2026312: Mon Dec 9 10:26:57 2024 00:16:13.446 read: IOPS=1375, BW=5503KiB/s (5635kB/s)(22.0MiB/4086msec) 00:16:13.446 slat (usec): min=5, max=24889, avg=21.61, stdev=446.40 00:16:13.446 clat (usec): min=172, max=42032, avg=702.81, stdev=4190.78 00:16:13.446 lat (usec): min=179, max=51009, avg=723.02, stdev=4229.76 00:16:13.446 clat percentiles (usec): 00:16:13.446 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 215], 00:16:13.446 | 30.00th=[ 231], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 285], 00:16:13.446 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 334], 00:16:13.446 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:16:13.446 | 99.99th=[42206] 00:16:13.446 bw ( KiB/s): min= 96, max=15512, per=22.21%, avg=6361.71, stdev=7325.42, samples=7 00:16:13.446 iops : min= 24, max= 3878, avg=1590.43, stdev=1831.35, samples=7 00:16:13.446 lat (usec) : 250=35.91%, 500=62.52%, 750=0.21%, 1000=0.20% 00:16:13.446 lat (msec) : 2=0.07%, 50=1.07% 00:16:13.446 cpu : usr=0.73%, sys=1.71%, ctx=5628, majf=0, minf=2 00:16:13.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.446 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.446 issued rwts: total=5622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.446 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.446 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2026313: Mon Dec 9 10:26:57 2024 00:16:13.446 read: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(35.0MiB/3343msec) 00:16:13.446 slat (nsec): min=7151, max=34739, 
avg=9019.26, stdev=1703.98 00:16:13.447 clat (usec): min=185, max=42930, avg=359.20, stdev=2124.80 00:16:13.447 lat (usec): min=193, max=42948, avg=368.22, stdev=2125.24 00:16:13.447 clat percentiles (usec): 00:16:13.447 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:16:13.447 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 245], 00:16:13.447 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 314], 00:16:13.447 | 99.00th=[ 400], 99.50th=[ 461], 99.90th=[41157], 99.95th=[41157], 00:16:13.447 | 99.99th=[42730] 00:16:13.447 bw ( KiB/s): min= 4976, max=17712, per=41.66%, avg=11932.00, stdev=4638.41, samples=6 00:16:13.447 iops : min= 1244, max= 4428, avg=2983.00, stdev=1159.60, samples=6 00:16:13.447 lat (usec) : 250=62.08%, 500=37.46%, 750=0.04%, 1000=0.08% 00:16:13.447 lat (msec) : 2=0.03%, 10=0.01%, 50=0.28% 00:16:13.447 cpu : usr=1.35%, sys=3.86%, ctx=8961, majf=0, minf=2 00:16:13.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.447 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.447 issued rwts: total=8958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.447 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2026314: Mon Dec 9 10:26:57 2024 00:16:13.447 read: IOPS=27, BW=107KiB/s (110kB/s)(316KiB/2950msec) 00:16:13.447 slat (nsec): min=8267, max=35064, avg=15210.16, stdev=5726.74 00:16:13.447 clat (usec): min=266, max=42501, avg=37018.59, stdev=12401.84 00:16:13.447 lat (usec): min=274, max=42519, avg=37033.78, stdev=12403.32 00:16:13.447 clat percentiles (usec): 00:16:13.447 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 529], 20.00th=[41157], 00:16:13.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:13.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:16:13.447 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:13.447 | 99.99th=[42730] 00:16:13.447 bw ( KiB/s): min= 96, max= 104, per=0.34%, avg=97.60, stdev= 3.58, samples=5 00:16:13.447 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:16:13.447 lat (usec) : 500=8.75%, 750=1.25% 00:16:13.447 lat (msec) : 50=88.75% 00:16:13.447 cpu : usr=0.07%, sys=0.00%, ctx=84, majf=0, minf=1 00:16:13.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.447 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.447 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.447 00:16:13.447 Run status group 0 (all jobs): 00:16:13.447 READ: bw=28.0MiB/s (29.3MB/s), 107KiB/s-15.5MiB/s (110kB/s-16.2MB/s), io=114MiB (120MB), run=2950-4086msec 00:16:13.447 00:16:13.447 Disk stats (read/write): 00:16:13.447 nvme0n1: ios=14326/0, merge=0/0, ticks=3352/0, in_queue=3352, util=94.34% 00:16:13.447 nvme0n2: ios=5652/0, merge=0/0, ticks=3955/0, in_queue=3955, util=98.24% 00:16:13.447 nvme0n3: ios=8993/0, merge=0/0, ticks=3439/0, in_queue=3439, util=99.59% 00:16:13.447 nvme0n4: ios=114/0, merge=0/0, ticks=2969/0, in_queue=2969, util=99.56% 00:16:14.011 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.012 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:14.270 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.270 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:14.835 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.835 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:15.399 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.399 10:26:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:15.655 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:15.655 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2026223 00:16:15.655 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:15.655 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:15.913 nvmf hotplug test: fio failed as expected 00:16:15.913 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:16.479 10:27:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.479 10:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:16.479 rmmod nvme_tcp 00:16:16.479 rmmod nvme_fabrics 00:16:16.479 rmmod nvme_keyring 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2023920 ']' 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2023920 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2023920 ']' 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2023920 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023920 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023920' 00:16:16.479 killing process with pid 2023920 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2023920 00:16:16.479 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2023920 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.046 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:18.951 00:16:18.951 real 0m30.211s 00:16:18.951 user 1m47.528s 00:16:18.951 sys 0m8.713s 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.951 ************************************ 00:16:18.951 END TEST nvmf_fio_target 00:16:18.951 ************************************ 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:18.951 ************************************ 00:16:18.951 START TEST nvmf_bdevio 00:16:18.951 ************************************ 00:16:18.951 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:18.951 * Looking for test storage... 
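Before the bdevio run gets underway, the hotplug phase of the nvmf_fio_target test that just finished is worth restating: fio runs a 10-second time_based read job against the four exported namespaces while bdev_raid_delete and bdev_malloc_delete RPCs pull the backing bdevs out from under it, so the io_u errors above (err=95, Operation not supported; err=5, Input/output error) and the non-zero fio exit status are the pass condition, announced as "nvmf hotplug test: fio failed as expected". A condensed sketch of that flow, assuming the rpc.py and fio-wrapper paths from this workspace and collapsing the interleaved raid/malloc deletion order into one loop:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start a time-based read job in the background; it should outlive the deletions.
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the bdevs out from under the running job over RPC.
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done

# fio exiting non-zero here is the expected outcome of the hotplug.
fio_status=0
wait $fio_pid || fio_status=$?
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'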
00:16:19.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.210 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.211 --rc genhtml_branch_coverage=1 00:16:19.211 --rc genhtml_function_coverage=1 00:16:19.211 --rc genhtml_legend=1 00:16:19.211 --rc geninfo_all_blocks=1 00:16:19.211 --rc geninfo_unexecuted_blocks=1 00:16:19.211 00:16:19.211 ' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.211 --rc genhtml_branch_coverage=1 00:16:19.211 --rc genhtml_function_coverage=1 00:16:19.211 --rc genhtml_legend=1 00:16:19.211 --rc geninfo_all_blocks=1 00:16:19.211 --rc geninfo_unexecuted_blocks=1 00:16:19.211 00:16:19.211 ' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.211 --rc genhtml_branch_coverage=1 00:16:19.211 --rc genhtml_function_coverage=1 00:16:19.211 --rc genhtml_legend=1 00:16:19.211 --rc geninfo_all_blocks=1 00:16:19.211 --rc geninfo_unexecuted_blocks=1 00:16:19.211 00:16:19.211 ' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.211 --rc genhtml_branch_coverage=1 00:16:19.211 --rc genhtml_function_coverage=1 00:16:19.211 --rc genhtml_legend=1 00:16:19.211 --rc geninfo_all_blocks=1 00:16:19.211 --rc geninfo_unexecuted_blocks=1 00:16:19.211 00:16:19.211 ' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:19.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:16:19.211 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:22.495 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:22.495 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:22.495 10:27:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:22.495 Found net devices under 0000:84:00.0: cvl_0_0 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:22.495 Found net devices under 0000:84:00.1: cvl_0_1 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.495 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.496 
10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:22.496 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:22.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:16:22.496 00:16:22.496 --- 10.0.0.2 ping statistics --- 00:16:22.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.496 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:16:22.496 00:16:22.496 --- 10.0.0.1 ping statistics --- 00:16:22.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.496 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2029445 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2029445 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2029445 ']' 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.496 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 [2024-12-09 10:27:07.164037] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
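The nvmftestinit trace above gives the target and the initiator separate network stacks on a single host: the target-side E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, an iptables rule admits the NVMe/TCP port, and the two pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace with the 0x78 core mask. A condensed sketch of that setup, assuming the cvl_0_* interface names from this machine and omitting the preliminary address flushes:

NS=cvl_0_0_ns_spdk

# Target interface lives in its own namespace; the initiator stays in the default one.
ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up

# Admit NVMe/TCP traffic and verify both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1

# The target app then runs inside the namespace, as traced above.
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &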
00:16:22.755 [2024-12-09 10:27:07.164155] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.755 [2024-12-09 10:27:07.255892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.756 [2024-12-09 10:27:07.313175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.756 [2024-12-09 10:27:07.313230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.756 [2024-12-09 10:27:07.313258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.756 [2024-12-09 10:27:07.313269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.756 [2024-12-09 10:27:07.313279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.756 [2024-12-09 10:27:07.314964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:22.756 [2024-12-09 10:27:07.315012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:22.756 [2024-12-09 10:27:07.315087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:22.756 [2024-12-09 10:27:07.315090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.013 [2024-12-09 10:27:07.510442] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.013 Malloc0 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.013 10:27:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.013 [2024-12-09 10:27:07.573186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:23.013 { 00:16:23.013 "params": { 00:16:23.013 "name": "Nvme$subsystem", 00:16:23.013 "trtype": "$TEST_TRANSPORT", 00:16:23.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.013 "adrfam": "ipv4", 00:16:23.013 "trsvcid": "$NVMF_PORT", 00:16:23.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.013 "hdgst": ${hdgst:-false}, 00:16:23.013 "ddgst": ${ddgst:-false} 00:16:23.013 }, 00:16:23.013 "method": "bdev_nvme_attach_controller" 00:16:23.013 } 00:16:23.013 EOF 00:16:23.013 )") 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:16:23.013 10:27:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:23.013 "params": { 00:16:23.013 "name": "Nvme1", 00:16:23.013 "trtype": "tcp", 00:16:23.013 "traddr": "10.0.0.2", 00:16:23.013 "adrfam": "ipv4", 00:16:23.013 "trsvcid": "4420", 00:16:23.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.013 "hdgst": false, 00:16:23.013 "ddgst": false 00:16:23.013 }, 00:16:23.013 "method": "bdev_nvme_attach_controller" 00:16:23.013 }' 00:16:23.013 [2024-12-09 10:27:07.651669] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
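At this point everything bdevio needs has been staged by four RPCs: a TCP transport with 8192-byte in-capsule data, a 64 MiB / 512-byte-block malloc bdev, a subsystem exposing it as a namespace, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then renders those coordinates into the bdev_nvme_attach_controller config that bdevio reads from /dev/fd/62. A sketch of the equivalent bring-up and config, using the values printed above; the outer subsystems/config envelope and the heredoc-on-fd plumbing are assumptions, since the log only prints the attach-controller fragment:

# Stage the target: transport, backing bdev, subsystem, namespace, listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevio consumes the attach config on an inherited fd rather than a file on disk.
# The subsystems/config wrapper below is the assumed envelope around the fragment
# printed in the log.
./test/bdev/bdevio/bdevio --json /dev/fd/62 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON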
00:16:23.013 [2024-12-09 10:27:07.651795] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029492 ] 00:16:23.274 [2024-12-09 10:27:07.739683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:23.274 [2024-12-09 10:27:07.802651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.274 [2024-12-09 10:27:07.802703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.274 [2024-12-09 10:27:07.802706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.537 I/O targets: 00:16:23.537 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:23.537 00:16:23.537 00:16:23.537 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.537 http://cunit.sourceforge.net/ 00:16:23.537 00:16:23.537 00:16:23.537 Suite: bdevio tests on: Nvme1n1 00:16:23.537 Test: blockdev write read block ...passed 00:16:23.537 Test: blockdev write zeroes read block ...passed 00:16:23.537 Test: blockdev write zeroes read no split ...passed 00:16:23.537 Test: blockdev write zeroes read split ...passed 00:16:23.537 Test: blockdev write zeroes read split partial ...passed 00:16:23.537 Test: blockdev reset ...[2024-12-09 10:27:08.105317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:23.537 [2024-12-09 10:27:08.105433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225ca70 (9): Bad file descriptor 00:16:23.537 [2024-12-09 10:27:08.175006] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:23.537 passed 00:16:23.795 Test: blockdev write read 8 blocks ...passed 00:16:23.795 Test: blockdev write read size > 128k ...passed 00:16:23.795 Test: blockdev write read invalid size ...passed 00:16:23.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:23.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:23.795 Test: blockdev write read max offset ...passed 00:16:23.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:23.795 Test: blockdev writev readv 8 blocks ...passed 00:16:23.795 Test: blockdev writev readv 30 x 1block ...passed 00:16:23.795 Test: blockdev writev readv block ...passed 00:16:23.795 Test: blockdev writev readv size > 128k ...passed 00:16:23.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:23.795 Test: blockdev comparev and writev ...[2024-12-09 10:27:08.427493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.427530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.795 [2024-12-09 10:27:08.427556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.427574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:23.795 [2024-12-09 10:27:08.427989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.428017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:23.795 [2024-12-09 10:27:08.428044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.428072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:23.795 [2024-12-09 10:27:08.428444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.428470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:23.795 [2024-12-09 10:27:08.428500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.428517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:23.795 [2024-12-09 10:27:08.428924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.428950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:23.795 [2024-12-09 10:27:08.428973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:23.795 [2024-12-09 10:27:08.428993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:24.052 passed 00:16:24.052 Test: blockdev nvme passthru rw ...passed 00:16:24.053 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:27:08.512080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.053 [2024-12-09 10:27:08.512111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:24.053 [2024-12-09 10:27:08.512338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.053 [2024-12-09 10:27:08.512367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:24.053 [2024-12-09 10:27:08.512532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.053 [2024-12-09 10:27:08.512556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:24.053 [2024-12-09 10:27:08.512697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:24.053 [2024-12-09 10:27:08.512727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:24.053 passed 00:16:24.053 Test: blockdev nvme admin passthru ...passed 00:16:24.053 Test: blockdev copy ...passed 00:16:24.053 00:16:24.053 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.053 suites 1 1 n/a 0 0 00:16:24.053 tests 23 23 23 0 0 00:16:24.053 asserts 152 152 152 0 n/a 00:16:24.053 00:16:24.053 Elapsed time = 1.163 seconds 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.311 rmmod nvme_tcp 00:16:24.311 rmmod nvme_fabrics 00:16:24.311 rmmod nvme_keyring 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
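[Annotation] Stripped of the xtrace noise, the target-side setup this bdevio pass exercised (target/bdevio.sh@18-22 in the trace above) reduces to the RPC sequence below. This is a minimal sketch: it assumes an nvmf_tgt application is already listening on the default /var/tmp/spdk.sock and that rpc.py is the tree's scripts/rpc.py; the transport options, bdev geometry, NQN, serial, and listener address are all taken from the log.

  rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport options exactly as traced
  rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ramdisk, 512-byte blocks (backs Nvme1n1 in the suite above)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420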
00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2029445 ']' 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2029445 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2029445 ']' 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2029445 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2029445 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2029445' 00:16:24.311 killing process with pid 2029445 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2029445 00:16:24.311 10:27:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2029445 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.572 10:27:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:27.149 00:16:27.149 real 0m7.677s 00:16:27.149 user 0m10.642s 00:16:27.149 sys 0m3.061s 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:27.149 ************************************ 00:16:27.149 END TEST nvmf_bdevio 00:16:27.149 ************************************ 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:27.149 00:16:27.149 real 4m44.121s 00:16:27.149 user 11m55.374s 00:16:27.149 sys 1m26.334s 
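[Annotation] The nvmftestfini teardown traced here reduces to roughly the following. The pid, module, and interface names come straight from the log; the trace additionally wraps the modprobe calls in a set +e retry loop, omitted here, and the namespace-removal line is an assumption standing in for _remove_spdk_ns, whose body the log does not show.

  kill 2029445                                  # killprocess: stop the nvmf target app
  modprobe -v -r nvme-tcp                       # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the test's tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: approximates _remove_spdk_ns
  ip -4 addr flush cvl_0_1                      # clear the initiator-side test address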
00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:27.149 ************************************ 00:16:27.149 END TEST nvmf_target_core 00:16:27.149 ************************************ 00:16:27.149 10:27:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:16:27.149 10:27:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.149 10:27:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.149 10:27:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.149 ************************************ 00:16:27.149 START TEST nvmf_target_extra 00:16:27.149 ************************************ 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:16:27.149 * Looking for test storage... 00:16:27.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:16:27.149 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.150 --rc genhtml_branch_coverage=1 00:16:27.150 --rc genhtml_function_coverage=1 00:16:27.150 --rc genhtml_legend=1 00:16:27.150 --rc geninfo_all_blocks=1 00:16:27.150 --rc geninfo_unexecuted_blocks=1 00:16:27.150 00:16:27.150 ' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.150 --rc genhtml_branch_coverage=1 00:16:27.150 --rc genhtml_function_coverage=1 00:16:27.150 --rc genhtml_legend=1 00:16:27.150 --rc geninfo_all_blocks=1 00:16:27.150 --rc geninfo_unexecuted_blocks=1 00:16:27.150 00:16:27.150 ' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.150 --rc genhtml_branch_coverage=1 00:16:27.150 --rc genhtml_function_coverage=1 00:16:27.150 --rc genhtml_legend=1 00:16:27.150 --rc geninfo_all_blocks=1 00:16:27.150 --rc geninfo_unexecuted_blocks=1 00:16:27.150 00:16:27.150 ' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:27.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.150 --rc genhtml_branch_coverage=1 00:16:27.150 --rc genhtml_function_coverage=1 00:16:27.150 --rc genhtml_legend=1 00:16:27.150 --rc geninfo_all_blocks=1 00:16:27.150 --rc geninfo_unexecuted_blocks=1 00:16:27.150 00:16:27.150 ' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.150 ************************************ 00:16:27.150 START TEST nvmf_example 00:16:27.150 ************************************ 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:27.150 * Looking for test storage... 
00:16:27.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:16:27.150 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:16:27.410 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.411 --rc genhtml_branch_coverage=1 00:16:27.411 --rc genhtml_function_coverage=1 00:16:27.411 --rc genhtml_legend=1 00:16:27.411 --rc geninfo_all_blocks=1 00:16:27.411 --rc geninfo_unexecuted_blocks=1 00:16:27.411 00:16:27.411 ' 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.411 --rc genhtml_branch_coverage=1 00:16:27.411 --rc genhtml_function_coverage=1 00:16:27.411 --rc genhtml_legend=1 00:16:27.411 --rc geninfo_all_blocks=1 00:16:27.411 --rc geninfo_unexecuted_blocks=1 00:16:27.411 00:16:27.411 ' 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.411 --rc genhtml_branch_coverage=1 00:16:27.411 --rc genhtml_function_coverage=1 00:16:27.411 --rc genhtml_legend=1 00:16:27.411 --rc geninfo_all_blocks=1 00:16:27.411 --rc geninfo_unexecuted_blocks=1 00:16:27.411 00:16:27.411 ' 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:27.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.411 --rc genhtml_branch_coverage=1 00:16:27.411 --rc genhtml_function_coverage=1 00:16:27.411 --rc genhtml_legend=1 00:16:27.411 --rc geninfo_all_blocks=1 00:16:27.411 --rc geninfo_unexecuted_blocks=1 00:16:27.411 00:16:27.411 ' 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:16:27.411 10:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.411 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:16:27.412 10:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:16:27.412 10:27:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:16:30.706 10:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:30.706 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:30.706 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:30.706 Found net devices under 0000:84:00.0: cvl_0_0 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:30.706 Found net devices under 0000:84:00.1: cvl_0_1 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.706 10:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:16:30.706 00:16:30.706 --- 10.0.0.2 ping statistics --- 00:16:30.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.706 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:16:30.706 00:16:30.706 --- 10.0.0.1 ping statistics --- 00:16:30.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.706 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2032283 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2032283 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2032283 ']' 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.706 10:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.706 10:27:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.963 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:31.221 10:27:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:16:43.417 Initializing NVMe Controllers
00:16:43.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:43.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:43.417 Initialization complete. Launching workers.
00:16:43.417 ========================================================
00:16:43.417                                                                                                               Latency(us)
00:16:43.417 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:16:43.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 : 14711.20  57.47    4350.15     880.64   16374.05
00:16:43.417 ========================================================
00:16:43.417 Total                                                                    : 14711.20  57.47    4350.15     880.64   16374.05
00:16:43.417
00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:43.417 rmmod nvme_tcp
00:16:43.417 rmmod nvme_fabrics
00:16:43.417 rmmod nvme_keyring
00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2032283 ']' 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2032283 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2032283 ']' 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2032283 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.417 10:27:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2032283 00:16:43.417 10:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2032283' 00:16:43.417 killing process with pid 2032283 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2032283 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2032283
00:16:43.417 nvmf threads initialize successfully
00:16:43.417 bdev subsystem init successfully
00:16:43.417 created a nvmf target service
00:16:43.417 create target's poll groups done
00:16:43.417 all subsystems of target started
00:16:43.417 nvmf target is running
00:16:43.417 all subsystems of target stopped
00:16:43.417 destroy target's poll groups done
00:16:43.417 destroyed the nvmf target service
00:16:43.417 bdev subsystem finish successfully
00:16:43.417 nvmf threads destroy successfully
00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.417 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:16:43.986
00:16:43.986 real 0m16.790s
00:16:43.986 user 0m43.670s
00:16:43.986 sys 0m4.374s
00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:16:43.986 ************************************
00:16:43.986 END TEST nvmf_example
00:16:43.986 ************************************
00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.986 ************************************ 00:16:43.986 START TEST nvmf_filesystem 00:16:43.986 ************************************ 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:43.986 * Looking for test storage... 00:16:43.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:16:43.986 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:44.248 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:44.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.249 --rc genhtml_branch_coverage=1 00:16:44.249 --rc genhtml_function_coverage=1 00:16:44.249 --rc genhtml_legend=1 00:16:44.249 --rc geninfo_all_blocks=1 00:16:44.249 --rc geninfo_unexecuted_blocks=1 00:16:44.249 00:16:44.249 ' 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:44.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.249 --rc genhtml_branch_coverage=1 00:16:44.249 --rc genhtml_function_coverage=1 00:16:44.249 --rc genhtml_legend=1 00:16:44.249 --rc geninfo_all_blocks=1 00:16:44.249 --rc geninfo_unexecuted_blocks=1 00:16:44.249 00:16:44.249 ' 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:44.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.249 --rc genhtml_branch_coverage=1 00:16:44.249 --rc genhtml_function_coverage=1 00:16:44.249 --rc genhtml_legend=1 00:16:44.249 --rc geninfo_all_blocks=1 00:16:44.249 --rc geninfo_unexecuted_blocks=1 00:16:44.249 00:16:44.249 ' 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:44.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.249 --rc genhtml_branch_coverage=1 00:16:44.249 --rc genhtml_function_coverage=1 00:16:44.249 --rc genhtml_legend=1 00:16:44.249 --rc geninfo_all_blocks=1 00:16:44.249 --rc geninfo_unexecuted_blocks=1 00:16:44.249 00:16:44.249 ' 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:16:44.249 10:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:44.249 
10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:44.249 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:16:44.250 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:44.250 #define SPDK_CONFIG_H 00:16:44.250 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:44.250 #define SPDK_CONFIG_APPS 1 00:16:44.250 #define SPDK_CONFIG_ARCH native 00:16:44.250 #undef SPDK_CONFIG_ASAN 00:16:44.250 #undef SPDK_CONFIG_AVAHI 00:16:44.250 #undef SPDK_CONFIG_CET 00:16:44.250 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:44.250 #define SPDK_CONFIG_COVERAGE 1 00:16:44.250 #define SPDK_CONFIG_CROSS_PREFIX 00:16:44.250 #undef SPDK_CONFIG_CRYPTO 00:16:44.250 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:44.250 #undef SPDK_CONFIG_CUSTOMOCF 00:16:44.250 #undef SPDK_CONFIG_DAOS 00:16:44.250 #define SPDK_CONFIG_DAOS_DIR 00:16:44.250 #define SPDK_CONFIG_DEBUG 1 00:16:44.250 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:44.250 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:44.250 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:44.250 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:44.250 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:44.250 #undef SPDK_CONFIG_DPDK_UADK 00:16:44.250 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:44.250 #define SPDK_CONFIG_EXAMPLES 1 00:16:44.250 #undef SPDK_CONFIG_FC 00:16:44.250 #define SPDK_CONFIG_FC_PATH 00:16:44.250 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:44.250 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:44.250 #define SPDK_CONFIG_FSDEV 1 00:16:44.250 #undef SPDK_CONFIG_FUSE 00:16:44.250 #undef SPDK_CONFIG_FUZZER 00:16:44.250 #define SPDK_CONFIG_FUZZER_LIB 00:16:44.250 #undef SPDK_CONFIG_GOLANG 00:16:44.250 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:44.250 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:44.250 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:44.250 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:44.250 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:44.250 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:44.250 #undef SPDK_CONFIG_HAVE_LZ4 00:16:44.250 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:44.250 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:44.250 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:44.250 #define SPDK_CONFIG_IDXD 1 00:16:44.250 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:44.250 #undef SPDK_CONFIG_IPSEC_MB 00:16:44.250 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:44.250 #define SPDK_CONFIG_ISAL 1 00:16:44.250 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:44.250 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:44.250 #define SPDK_CONFIG_LIBDIR 00:16:44.250 #undef SPDK_CONFIG_LTO 00:16:44.250 #define SPDK_CONFIG_MAX_LCORES 128 00:16:44.250 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:44.250 #define SPDK_CONFIG_NVME_CUSE 1 00:16:44.250 #undef SPDK_CONFIG_OCF 00:16:44.250 #define SPDK_CONFIG_OCF_PATH 00:16:44.250 #define SPDK_CONFIG_OPENSSL_PATH 00:16:44.250 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:44.250 #define SPDK_CONFIG_PGO_DIR 00:16:44.250 #undef SPDK_CONFIG_PGO_USE 00:16:44.250 #define SPDK_CONFIG_PREFIX /usr/local 00:16:44.250 #undef SPDK_CONFIG_RAID5F 00:16:44.250 #undef SPDK_CONFIG_RBD 00:16:44.250 #define SPDK_CONFIG_RDMA 1 00:16:44.250 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:44.250 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:44.250 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:44.250 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:44.250 #define SPDK_CONFIG_SHARED 1 00:16:44.250 #undef SPDK_CONFIG_SMA 00:16:44.250 #define SPDK_CONFIG_TESTS 1 00:16:44.250 #undef SPDK_CONFIG_TSAN 
00:16:44.250 #define SPDK_CONFIG_UBLK 1 00:16:44.250 #define SPDK_CONFIG_UBSAN 1 00:16:44.250 #undef SPDK_CONFIG_UNIT_TESTS 00:16:44.250 #undef SPDK_CONFIG_URING 00:16:44.250 #define SPDK_CONFIG_URING_PATH 00:16:44.250 #undef SPDK_CONFIG_URING_ZNS 00:16:44.250 #undef SPDK_CONFIG_USDT 00:16:44.250 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:44.250 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:44.250 #define SPDK_CONFIG_VFIO_USER 1 00:16:44.250 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:44.250 #define SPDK_CONFIG_VHOST 1 00:16:44.250 #define SPDK_CONFIG_VIRTIO 1 00:16:44.250 #undef SPDK_CONFIG_VTUNE 00:16:44.251 #define SPDK_CONFIG_VTUNE_DIR 00:16:44.251 #define SPDK_CONFIG_WERROR 1 00:16:44.251 #define SPDK_CONFIG_WPDK_DIR 00:16:44.251 #undef SPDK_CONFIG_XNVME 00:16:44.251 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:44.251 10:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:16:44.251 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:44.252 10:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:44.252 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
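The suppression-file dance in the middle of that trace is worth calling out: the harness recreates /var/tmp/asan_suppression_file on every run, seeds it with a single known leak pattern, and points LeakSanitizer at it. Condensed from the commands above (the original builds the file in two steps; the single redirect here is a simplification):

    # Rebuild the LSAN suppression file so the known leak in libfuse3 does
    # not fail sanitizer-enabled runs.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"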
00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2033968 ]] 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2033968 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
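The kill -0 2033968 at the end of that run appears to be the harness confirming its own test process is still alive before set_test_storage carves out scratch space: signal 0 performs the existence and permission checks without delivering anything. The idiom, with a hypothetical $pid:

    # Signal 0 delivers nothing; the exit status alone says whether $pid
    # exists and is signalable by the caller.
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    fi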
00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.OfEyR4 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OfEyR4/tests/target /tmp/spdk.OfEyR4 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:16:44.253 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:16:44.253 10:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39117971456 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=45077106688 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5959135232 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=22527184896 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=22538551296 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=8993034240 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9015422976 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22388736 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=22538072064 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=22538555392 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=483328 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:44.254 10:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4507697152 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4507709440 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:44.254 * Looking for test storage... 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=39117971456 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8173727744 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:16:44.254 10:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:16:44.254 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:44.514 10:27:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.514 --rc genhtml_branch_coverage=1 00:16:44.514 --rc genhtml_function_coverage=1 00:16:44.514 --rc genhtml_legend=1 00:16:44.514 --rc geninfo_all_blocks=1 00:16:44.514 --rc geninfo_unexecuted_blocks=1 00:16:44.514 00:16:44.514 ' 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.514 --rc genhtml_branch_coverage=1 00:16:44.514 --rc genhtml_function_coverage=1 00:16:44.514 --rc genhtml_legend=1 00:16:44.514 --rc geninfo_all_blocks=1 00:16:44.514 --rc geninfo_unexecuted_blocks=1 00:16:44.514 00:16:44.514 ' 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.514 --rc genhtml_branch_coverage=1 00:16:44.514 --rc genhtml_function_coverage=1 00:16:44.514 --rc genhtml_legend=1 00:16:44.514 --rc geninfo_all_blocks=1 00:16:44.514 --rc geninfo_unexecuted_blocks=1 00:16:44.514 00:16:44.514 ' 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:44.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.514 --rc genhtml_branch_coverage=1 00:16:44.514 --rc genhtml_function_coverage=1 00:16:44.514 --rc genhtml_legend=1 00:16:44.514 --rc geninfo_all_blocks=1 00:16:44.514 --rc geninfo_unexecuted_blocks=1 00:16:44.514 00:16:44.514 ' 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.514 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:44.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:16:44.515 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:16:47.798 
10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:47.798 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.798 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:47.799 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:47.799 Found net devices under 0000:84:00.0: cvl_0_0 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:47.799 Found net devices under 
0000:84:00.1: cvl_0_1 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.799 10:27:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:47.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:16:47.799 00:16:47.799 --- 10.0.0.2 ping statistics --- 00:16:47.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.799 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:16:47.799 00:16:47.799 --- 10.0.0.1 ping statistics --- 00:16:47.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.799 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.799 ************************************ 00:16:47.799 START TEST nvmf_filesystem_no_in_capsule 00:16:47.799 ************************************ 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
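Stripped of xtrace noise, the nvmf_tcp_init sequence traced above reduces to a dozen iproute2 and iptables commands: one port of the E810 pair becomes the target inside a private namespace, the other stays in the root namespace as the initiator, and a firewall rule opens the NVMe/TCP port between them. Interface names follow this rig's cvl_0_* convention:

    # Reset both ports, then split them across namespaces.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator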
00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2035755 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2035755 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2035755 ']' 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.799 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:47.799 [2024-12-09 10:27:32.244411] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:16:47.799 [2024-12-09 10:27:32.244520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.799 [2024-12-09 10:27:32.383445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.059 [2024-12-09 10:27:32.501764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.059 [2024-12-09 10:27:32.501883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.059 [2024-12-09 10:27:32.501921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.059 [2024-12-09 10:27:32.501952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.059 [2024-12-09 10:27:32.501978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
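waitforlisten, invoked right after nvmf_tgt is launched in the namespace, is essentially a bounded poll against the UNIX-domain RPC socket. A minimal sketch of the idea, assuming SPDK's stock scripts/rpc.py is available under $rootdir (the real helper in autotest_common.sh carries more bookkeeping):

    # Poll the RPC socket until the target answers, giving up if the process
    # dies or retries run out. $rootdir is assumed to be the SPDK checkout.
    pid=2035755 rpc_addr=/var/tmp/spdk.sock max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break   # target is up and serving RPCs
        fi
        sleep 0.5
    done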
00:16:48.059 [2024-12-09 10:27:32.505460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.059 [2024-12-09 10:27:32.505556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.059 [2024-12-09 10:27:32.505654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.059 [2024-12-09 10:27:32.505658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:48.059 [2024-12-09 10:27:32.671298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.059 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:48.317 Malloc1 00:16:48.317 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.317 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:48.317 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.318 10:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:48.318 [2024-12-09 10:27:32.885598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:48.318 { 00:16:48.318 "name": "Malloc1", 00:16:48.318 "aliases": [ 00:16:48.318 "20cb8bfb-638e-4000-809b-b8d289c1bf21" 00:16:48.318 ], 00:16:48.318 "product_name": "Malloc disk", 00:16:48.318 "block_size": 512, 00:16:48.318 "num_blocks": 1048576, 00:16:48.318 "uuid": "20cb8bfb-638e-4000-809b-b8d289c1bf21", 00:16:48.318 "assigned_rate_limits": { 00:16:48.318 "rw_ios_per_sec": 0, 00:16:48.318 "rw_mbytes_per_sec": 0, 00:16:48.318 "r_mbytes_per_sec": 0, 00:16:48.318 "w_mbytes_per_sec": 0 00:16:48.318 }, 00:16:48.318 "claimed": true, 00:16:48.318 "claim_type": "exclusive_write", 00:16:48.318 "zoned": false, 00:16:48.318 "supported_io_types": { 00:16:48.318 "read": 
true, 00:16:48.318 "write": true, 00:16:48.318 "unmap": true, 00:16:48.318 "flush": true, 00:16:48.318 "reset": true, 00:16:48.318 "nvme_admin": false, 00:16:48.318 "nvme_io": false, 00:16:48.318 "nvme_io_md": false, 00:16:48.318 "write_zeroes": true, 00:16:48.318 "zcopy": true, 00:16:48.318 "get_zone_info": false, 00:16:48.318 "zone_management": false, 00:16:48.318 "zone_append": false, 00:16:48.318 "compare": false, 00:16:48.318 "compare_and_write": false, 00:16:48.318 "abort": true, 00:16:48.318 "seek_hole": false, 00:16:48.318 "seek_data": false, 00:16:48.318 "copy": true, 00:16:48.318 "nvme_iov_md": false 00:16:48.318 }, 00:16:48.318 "memory_domains": [ 00:16:48.318 { 00:16:48.318 "dma_device_id": "system", 00:16:48.318 "dma_device_type": 1 00:16:48.318 }, 00:16:48.318 { 00:16:48.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.318 "dma_device_type": 2 00:16:48.318 } 00:16:48.318 ], 00:16:48.318 "driver_specific": {} 00:16:48.318 } 00:16:48.318 ]' 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:48.318 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:48.577 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:48.577 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:48.577 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:48.577 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:48.577 10:27:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.145 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.145 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.145 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.145 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.146 10:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:51.050 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:51.309 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:51.309 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:51.309 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:51.875 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:52.812 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:52.812 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:52.812 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:52.812 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.812 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:53.070 ************************************ 00:16:53.070 START TEST filesystem_ext4 00:16:53.070 ************************************ 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
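The stretch above is the complete initiator-side bring-up for the no_in_capsule pass: the namespace and TCP listener are added over RPC, the host attaches with nvme connect, and get_bdev_size cross-checks the exported capacity. bdev_get_bdevs reported block_size 512 and num_blocks 1048576, so 512 * 1048576 = 536870912 bytes, which is exactly the value sec_size_to_bytes reads back from /sys/block/nvme0n1 before the GPT partition is created. The same flow can be reproduced by hand with a minimal sketch like the one below, assuming a running SPDK target whose scripts/rpc.py is reachable as rpc.py; addresses and NQNs are copied from the trace, and the host-specific --hostnqn/--hostid options are omitted:

  # export a 512 MiB malloc bdev (1048576 blocks x 512 B) over NVMe/TCP
  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # attach from the initiator and carve a single full-size GPT partition for the fs tests
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe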
00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:53.070 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:53.070 mke2fs 1.47.0 (5-Feb-2023) 00:16:53.070 Discarding device blocks: 0/522240 done 00:16:53.070 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:53.070 Filesystem UUID: 7d42303c-545c-426d-a8f3-103ba5c16b52 00:16:53.070 Superblock backups stored on blocks: 00:16:53.070 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:53.070 00:16:53.070 Allocating group tables: 0/64 done 00:16:53.070 Writing inode tables: 0/64 done 00:16:53.327 Creating journal (8192 blocks): done 00:16:53.327 Writing superblocks and filesystem accounting information: 0/64 done 00:16:53.327 00:16:53.327 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:53.327 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:58.586 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:58.586 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:58.586 
10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2035755 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:58.586 00:16:58.586 real 0m5.577s 00:16:58.586 user 0m0.016s 00:16:58.586 sys 0m0.073s 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:58.586 ************************************ 00:16:58.586 END TEST filesystem_ext4 00:16:58.586 ************************************ 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:58.586 ************************************ 00:16:58.586 START TEST filesystem_btrfs 00:16:58.586 ************************************ 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:58.586 10:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:58.586 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:58.844 btrfs-progs v6.8.1 00:16:58.844 See https://btrfs.readthedocs.io for more information. 00:16:58.844 00:16:58.844 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:58.844 NOTE: several default settings have changed in version 5.15, please make sure 00:16:58.844 this does not affect your deployments: 00:16:58.844 - DUP for metadata (-m dup) 00:16:58.844 - enabled no-holes (-O no-holes) 00:16:58.844 - enabled free-space-tree (-R free-space-tree) 00:16:58.844 00:16:58.844 Label: (null) 00:16:58.844 UUID: f5b4d0f8-fb25-4ac2-8aca-0f56e144ccc1 00:16:58.844 Node size: 16384 00:16:58.844 Sector size: 4096 (CPU page size: 4096) 00:16:58.844 Filesystem size: 510.00MiB 00:16:58.844 Block group profiles: 00:16:58.844 Data: single 8.00MiB 00:16:58.844 Metadata: DUP 32.00MiB 00:16:58.844 System: DUP 8.00MiB 00:16:58.844 SSD detected: yes 00:16:58.844 Zoned device: no 00:16:58.844 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:58.844 Checksum: crc32c 00:16:58.844 Number of devices: 1 00:16:58.844 Devices: 00:16:58.844 ID SIZE PATH 00:16:58.844 1 510.00MiB /dev/nvme0n1p1 00:16:58.844 00:16:58.844 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:58.844 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2035755 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:59.410 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:59.410 
10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:59.410 00:16:59.410 real 0m0.881s 00:16:59.410 user 0m0.015s 00:16:59.410 sys 0m0.112s 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:59.410 ************************************ 00:16:59.410 END TEST filesystem_btrfs 00:16:59.410 ************************************ 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:59.410 ************************************ 00:16:59.410 START TEST filesystem_xfs 00:16:59.410 ************************************ 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:59.410 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:59.669 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:59.669 = sectsz=512 attr=2, projid32bit=1 00:16:59.669 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:59.669 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:59.669 data 
= bsize=4096 blocks=130560, imaxpct=25 00:16:59.669 = sunit=0 swidth=0 blks 00:16:59.669 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:59.669 log =internal log bsize=4096 blocks=16384, version=2 00:16:59.669 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:59.669 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:00.601 Discarding blocks...Done. 00:17:00.601 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:17:00.601 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2035755 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:02.495 00:17:02.495 real 0m2.818s 00:17:02.495 user 0m0.024s 00:17:02.495 sys 0m0.060s 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:02.495 ************************************ 00:17:02.495 END TEST filesystem_xfs 00:17:02.495 ************************************ 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:02.495 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.495 10:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2035755 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2035755 ']' 00:17:02.495 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2035755 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2035755 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2035755' 00:17:02.496 killing process with pid 2035755 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2035755 00:17:02.496 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2035755 00:17:03.061 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:03.061 00:17:03.061 real 0m15.497s 00:17:03.061 user 0m59.235s 00:17:03.061 sys 0m2.226s 00:17:03.061 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.061 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.061 ************************************ 00:17:03.061 END TEST nvmf_filesystem_no_in_capsule 00:17:03.061 ************************************ 00:17:03.061 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:03.061 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.061 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.061 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:03.319 ************************************ 00:17:03.319 START TEST nvmf_filesystem_in_capsule 00:17:03.319 ************************************ 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2037710 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2037710 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2037710 ']' 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
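That closes the no_in_capsule variant (15.5 s wall-clock, target pid 2035755 torn down), and the harness immediately boots a fresh nvmf_tgt for the in_capsule variant, pid 2037710 below. Both variants run the identical filesystem workload; the only intended difference is the in-capsule data size handed to nvmf_create_transport, visible further down as -c 4096. With size 0 every write payload needs a separate data transfer on the wire, while 4096 lets writes of up to 4 KiB travel inside the NVMe/TCP command capsule itself. Roughly, using the same rpc.py convention as above (the -c 0 spelling for the first pass is an assumption, since that call scrolled by before this excerpt):

  # no_in_capsule variant: in-capsule data disabled
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # in_capsule variant: up to 4096 B of write payload rides in the capsule
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096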
00:17:03.319 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.320 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.320 [2024-12-09 10:27:47.827365] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:17:03.320 [2024-12-09 10:27:47.827483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.320 [2024-12-09 10:27:47.918605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.577 [2024-12-09 10:27:47.986859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.577 [2024-12-09 10:27:47.986925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.577 [2024-12-09 10:27:47.986945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.577 [2024-12-09 10:27:47.986959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.577 [2024-12-09 10:27:47.986971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.577 [2024-12-09 10:27:47.988847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.577 [2024-12-09 10:27:47.988905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.577 [2024-12-09 10:27:47.988932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.577 [2024-12-09 10:27:47.988936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.577 [2024-12-09 10:27:48.154230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.577 10:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.577 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.836 Malloc1 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.836 [2024-12-09 10:27:48.352855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:17:03.836 10:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:03.836 { 00:17:03.836 "name": "Malloc1", 00:17:03.836 "aliases": [ 00:17:03.836 "55cd4a82-7eee-43ee-86da-792357e3f8a4" 00:17:03.836 ], 00:17:03.836 "product_name": "Malloc disk", 00:17:03.836 "block_size": 512, 00:17:03.836 "num_blocks": 1048576, 00:17:03.836 "uuid": "55cd4a82-7eee-43ee-86da-792357e3f8a4", 00:17:03.836 "assigned_rate_limits": { 00:17:03.836 "rw_ios_per_sec": 0, 00:17:03.836 "rw_mbytes_per_sec": 0, 00:17:03.836 "r_mbytes_per_sec": 0, 00:17:03.836 "w_mbytes_per_sec": 0 00:17:03.836 }, 00:17:03.836 "claimed": true, 00:17:03.836 "claim_type": "exclusive_write", 00:17:03.836 "zoned": false, 00:17:03.836 "supported_io_types": { 00:17:03.836 "read": true, 00:17:03.836 "write": true, 00:17:03.836 "unmap": true, 00:17:03.836 "flush": true, 00:17:03.836 "reset": true, 00:17:03.836 "nvme_admin": false, 00:17:03.836 "nvme_io": false, 00:17:03.836 "nvme_io_md": false, 00:17:03.836 "write_zeroes": true, 00:17:03.836 "zcopy": true, 00:17:03.836 "get_zone_info": false, 00:17:03.836 "zone_management": false, 00:17:03.836 "zone_append": false, 00:17:03.836 "compare": false, 00:17:03.836 "compare_and_write": false, 00:17:03.836 "abort": true, 00:17:03.836 "seek_hole": false, 00:17:03.836 "seek_data": false, 00:17:03.836 "copy": true, 00:17:03.836 "nvme_iov_md": false 00:17:03.836 }, 00:17:03.836 "memory_domains": [ 00:17:03.836 { 00:17:03.836 "dma_device_id": "system", 00:17:03.836 "dma_device_type": 1 00:17:03.836 }, 00:17:03.836 { 00:17:03.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.836 "dma_device_type": 2 00:17:03.836 } 00:17:03.836 ], 00:17:03.836 "driver_specific": {} 00:17:03.836 } 00:17:03.836 ]' 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:17:03.836 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:04.094 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:17:04.095 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:17:04.095 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:17:04.095 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:04.095 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:04.659 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:04.659 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:17:04.659 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.659 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:04.659 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:06.616 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:07.180 10:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:17:07.438 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:08.810 ************************************ 00:17:08.810 START TEST filesystem_in_capsule_ext4 00:17:08.810 ************************************ 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:17:08.810 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:08.810 mke2fs 1.47.0 (5-Feb-2023) 00:17:08.810 Discarding device blocks: 0/522240 done 00:17:08.810 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:08.810 Filesystem UUID: e0f6d7c3-1eec-4c09-b1e5-8270c7b50035 00:17:08.810 Superblock backups stored on blocks: 00:17:08.810 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:08.810 00:17:08.810 Allocating group tables: 0/64 done 00:17:08.810 Writing inode tables: 
0/64 done 00:17:08.810 Creating journal (8192 blocks): done 00:17:09.743 Writing superblocks and filesystem accounting information: 0/64 done 00:17:09.743 00:17:09.743 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:17:09.743 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2037710 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:14.998 00:17:14.998 real 0m6.515s 00:17:14.998 user 0m0.017s 00:17:14.998 sys 0m0.068s 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:17:14.998 ************************************ 00:17:14.998 END TEST filesystem_in_capsule_ext4 00:17:14.998 ************************************ 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:14.998 
************************************ 00:17:14.998 START TEST filesystem_in_capsule_btrfs 00:17:14.998 ************************************ 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:17:14.998 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:15.565 btrfs-progs v6.8.1 00:17:15.565 See https://btrfs.readthedocs.io for more information. 00:17:15.565 00:17:15.565 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:15.565 NOTE: several default settings have changed in version 5.15, please make sure 00:17:15.565 this does not affect your deployments: 00:17:15.565 - DUP for metadata (-m dup) 00:17:15.565 - enabled no-holes (-O no-holes) 00:17:15.565 - enabled free-space-tree (-R free-space-tree) 00:17:15.565 00:17:15.565 Label: (null) 00:17:15.565 UUID: 07c2faab-add2-496a-8387-2ed25375cea7 00:17:15.565 Node size: 16384 00:17:15.565 Sector size: 4096 (CPU page size: 4096) 00:17:15.565 Filesystem size: 510.00MiB 00:17:15.565 Block group profiles: 00:17:15.565 Data: single 8.00MiB 00:17:15.565 Metadata: DUP 32.00MiB 00:17:15.565 System: DUP 8.00MiB 00:17:15.565 SSD detected: yes 00:17:15.565 Zoned device: no 00:17:15.565 Features: extref, skinny-metadata, no-holes, free-space-tree 00:17:15.565 Checksum: crc32c 00:17:15.565 Number of devices: 1 00:17:15.565 Devices: 00:17:15.565 ID SIZE PATH 00:17:15.565 1 510.00MiB /dev/nvme0n1p1 00:17:15.565 00:17:15.565 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:17:15.565 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2037710 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:15.823 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:16.082 00:17:16.082 real 0m0.838s 00:17:16.082 user 0m0.025s 00:17:16.082 sys 0m0.105s 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:17:16.082 ************************************ 00:17:16.082 END TEST filesystem_in_capsule_btrfs 00:17:16.082 ************************************ 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:16.082 ************************************ 00:17:16.082 START TEST filesystem_in_capsule_xfs 00:17:16.082 ************************************ 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:17:16.082 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:16.082 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:16.082 = sectsz=512 attr=2, projid32bit=1 00:17:16.082 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:16.082 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:16.082 data = bsize=4096 blocks=130560, imaxpct=25 00:17:16.082 = sunit=0 swidth=0 blks 00:17:16.082 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:16.082 log =internal log bsize=4096 blocks=16384, version=2 00:17:16.082 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:16.082 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:17.027 Discarding blocks...Done. 
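The xtrace above is autotest_common.sh's make_filesystem helper at work: given a filesystem type and a device, it selects the right force flag (-F for ext4, -f for everything else, per the checks at @935-@938) and runs the matching mkfs. A sketch of that pattern reconstructed from the trace rather than quoted from the repository (the retry loop around mkfs is an assumption suggested by the i=0 counter at @932; only the flag selection and the mkfs.xfs -f /dev/nvme0n1p1 call appear verbatim in the log):

make_filesystem() {
    local fstype=$1     # e.g. xfs, btrfs, ext4
    local dev_name=$2   # e.g. /dev/nvme0n1p1
    local i=0           # retry counter, as in the trace (@932)
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F        # mke2fs spells its force flag -F
    else
        force=-f        # mkfs.xfs and mkfs.btrfs use -f
    fi
    # assumed: retry a few times, since a freshly repartitioned device
    # can be briefly busy while udev settles
    until mkfs.$fstype $force "$dev_name"; do
        i=$((i + 1))
        [ "$i" -le 3 ] || return 1
        sleep 1
    done
    return 0
}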
00:17:17.028 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:17:17.028 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2037710 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:19.586 00:17:19.586 real 0m3.274s 00:17:19.586 user 0m0.025s 00:17:19.586 sys 0m0.059s 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:19.586 ************************************ 00:17:19.586 END TEST filesystem_in_capsule_xfs 00:17:19.586 ************************************ 00:17:19.586 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:19.586 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:19.586 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.586 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.586 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
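Both filesystem runs above drove the same I/O smoke test from target/filesystem.sh@23-30: mount the exported namespace's partition, create a file, sync, remove it, sync again, unmount, and then assert with kill -0 that the target process (pid 2037710 in this log) survived the traffic. Condensed from the trace:

# filesystem.sh@23-30, as exercised against the NVMe-oF namespace
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa   # create a file over the fabric
sync
rm /mnt/device/aaa      # and delete it again
sync
umount /mnt/device
kill -0 2037710         # the nvmf_tgt process must still be alive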
common/autotest_common.sh@1223 -- # local i=0 00:17:19.586 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:19.586 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2037710 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2037710 ']' 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2037710 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2037710 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2037710' 00:17:19.844 killing process with pid 2037710 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2037710 00:17:19.844 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2037710 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:20.414 00:17:20.414 real 0m17.070s 00:17:20.414 user 1m5.822s 00:17:20.414 sys 0m2.214s 00:17:20.414 10:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:20.414 ************************************ 00:17:20.414 END TEST nvmf_filesystem_in_capsule 00:17:20.414 ************************************ 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.414 rmmod nvme_tcp 00:17:20.414 rmmod nvme_fabrics 00:17:20.414 rmmod nvme_keyring 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.414 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.953 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:22.953 00:17:22.953 real 0m38.510s 00:17:22.953 user 2m6.487s 00:17:22.953 sys 0m6.978s 00:17:22.953 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.953 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:22.953 
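nvmftestfini above tears down what nvmftestinit built: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the firewall rules this job added are removed, the test namespace is cleaned up via _remove_spdk_ns, and cvl_0_1 has its addresses flushed. The iptables step keeps every rule except those tagged with the SPDK_NVMF comment, by round-tripping the ruleset through a filter, exactly as the trace shows at nvmf/common.sh@791:

# remove only the rules this test added (tagged SPDK_NVMF), keep the rest
iptables-save | grep -v SPDK_NVMF | iptables-restore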
************************************ 00:17:22.953 END TEST nvmf_filesystem 00:17:22.953 ************************************ 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.953 ************************************ 00:17:22.953 START TEST nvmf_target_discovery 00:17:22.953 ************************************ 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:22.953 * Looking for test storage... 00:17:22.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:22.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.953 --rc genhtml_branch_coverage=1 00:17:22.953 --rc genhtml_function_coverage=1 00:17:22.953 --rc genhtml_legend=1 00:17:22.953 --rc geninfo_all_blocks=1 00:17:22.953 --rc geninfo_unexecuted_blocks=1 00:17:22.953 00:17:22.953 ' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:22.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.953 --rc genhtml_branch_coverage=1 00:17:22.953 --rc genhtml_function_coverage=1 00:17:22.953 --rc genhtml_legend=1 00:17:22.953 --rc geninfo_all_blocks=1 00:17:22.953 --rc geninfo_unexecuted_blocks=1 00:17:22.953 00:17:22.953 ' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:22.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.953 --rc genhtml_branch_coverage=1 00:17:22.953 --rc genhtml_function_coverage=1 00:17:22.953 --rc genhtml_legend=1 00:17:22.953 --rc geninfo_all_blocks=1 00:17:22.953 --rc geninfo_unexecuted_blocks=1 00:17:22.953 00:17:22.953 ' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:22.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.953 --rc genhtml_branch_coverage=1 00:17:22.953 --rc genhtml_function_coverage=1 00:17:22.953 --rc genhtml_legend=1 00:17:22.953 --rc geninfo_all_blocks=1 00:17:22.953 --rc geninfo_unexecuted_blocks=1 00:17:22.953 00:17:22.953 ' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.953 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:17:22.954 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.248 10:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.248 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:26.249 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:26.249 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:26.249 Found net devices under 0000:84:00.0: cvl_0_0 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
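Each PCI function that matched the E810 list (0000:84:00.0 and 0000:84:00.1, device id 0x159b) is resolved to its kernel interface by globbing the device's sysfs net directory, which is where the cvl_0_0 name just above (and cvl_0_1 just below) comes from. The same lookup in isolation, mirroring nvmf/common.sh@411-@428:

# map a PCI function to its net interface name via sysfs
pci=0000:84:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the basenames
echo "Found net devices under $pci: ${pci_net_devs[*]}"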
00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:26.249 Found net devices under 0000:84:00.1: cvl_0_1 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.249 10:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:17:26.249 00:17:26.249 --- 10.0.0.2 ping statistics --- 00:17:26.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.249 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:17:26.249 00:17:26.249 --- 10.0.0.1 ping statistics --- 00:17:26.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.249 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2041875 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2041875 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2041875 ']' 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.249 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.250 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.250 [2024-12-09 10:28:10.519588] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:17:26.250 [2024-12-09 10:28:10.519774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.250 [2024-12-09 10:28:10.677663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.250 [2024-12-09 10:28:10.799067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.250 [2024-12-09 10:28:10.799177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.250 [2024-12-09 10:28:10.799214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.250 [2024-12-09 10:28:10.799244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.250 [2024-12-09 10:28:10.799269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
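The target is launched with ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, and waitforlisten 2041875 polls while the EAL and app notices above (and the reactor notices just below) scroll past, returning once the target answers on /var/tmp/spdk.sock. The real helper lives in autotest_common.sh; a minimal stand-in that captures the idea, with the socket test and polling interval being assumptions:

# poll for the SPDK RPC socket; give up if the target dies first
pid=2041875
sock=/var/tmp/spdk.sock
until [ -S "$sock" ]; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.2
done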
00:17:26.250 [2024-12-09 10:28:10.802685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.250 [2024-12-09 10:28:10.802805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.250 [2024-12-09 10:28:10.802838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.250 [2024-12-09 10:28:10.806740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.507 [2024-12-09 10:28:10.981407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.507 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.507 Null1 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.507 10:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.507 [2024-12-09 10:28:11.043935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.507 Null2
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.507 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 Null3
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 Null4
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420
00:17:26.765
00:17:26.765 Discovery Log Number of Records 6, Generation counter 6
00:17:26.765 =====Discovery Log Entry 0======
00:17:26.765 trtype: tcp
00:17:26.765 adrfam: ipv4
00:17:26.765 subtype: current discovery subsystem
00:17:26.765 treq: not required
00:17:26.765 portid: 0
00:17:26.765 trsvcid: 4420
00:17:26.765 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:17:26.765 traddr: 10.0.0.2
00:17:26.765 eflags: explicit discovery connections, duplicate discovery information
00:17:26.765 sectype: none
00:17:26.765 =====Discovery Log Entry 1======
00:17:26.765 trtype: tcp
00:17:26.765 adrfam: ipv4
00:17:26.765 subtype: nvme subsystem
00:17:26.765 treq: not required
00:17:26.765 portid: 0
00:17:26.765 trsvcid: 4420
00:17:26.765 subnqn: nqn.2016-06.io.spdk:cnode1
00:17:26.765 traddr: 10.0.0.2
00:17:26.765 eflags: none
00:17:26.765 sectype: none
00:17:26.765 =====Discovery Log Entry 2======
00:17:26.765 trtype: tcp
00:17:26.765 adrfam: ipv4
00:17:26.765 subtype: nvme subsystem
00:17:26.765 treq: not required
00:17:26.765 portid: 0
00:17:26.765 trsvcid: 4420
00:17:26.765 subnqn: nqn.2016-06.io.spdk:cnode2
00:17:26.765 traddr: 10.0.0.2
00:17:26.765 eflags: none
00:17:26.765 sectype: none
00:17:26.765 =====Discovery Log Entry 3======
00:17:26.765 trtype: tcp
00:17:26.765 adrfam: ipv4
00:17:26.765 subtype: nvme subsystem
00:17:26.765 treq: not required
00:17:26.765 portid: 0
00:17:26.765 trsvcid: 4420
00:17:26.765 subnqn: nqn.2016-06.io.spdk:cnode3
00:17:26.765 traddr: 10.0.0.2
00:17:26.765 eflags: none
00:17:26.765 sectype: none
00:17:26.765 =====Discovery Log Entry 4======
00:17:26.765 trtype: tcp
00:17:26.765 adrfam: ipv4
00:17:26.765 subtype: nvme subsystem
00:17:26.765 treq: not required
00:17:26.765 portid: 0
00:17:26.765 trsvcid: 4420
00:17:26.765 subnqn: nqn.2016-06.io.spdk:cnode4
00:17:26.765 traddr: 10.0.0.2
00:17:26.765 eflags: none
00:17:26.765 sectype: none
00:17:26.765 =====Discovery Log Entry 5======
00:17:26.765 trtype: tcp
00:17:26.765 adrfam: ipv4
00:17:26.765 subtype: discovery subsystem referral
00:17:26.765 treq: not required
00:17:26.765 portid: 0
00:17:26.765 trsvcid: 4430
00:17:26.765 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:17:26.765 traddr: 10.0.0.2
00:17:26.765 eflags: none
00:17:26.765 sectype: none
00:17:26.765 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:17:26.765 Perform nvmf subsystem discovery via RPC
00:17:26.765 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:17:26.765 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.765 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.765 [
00:17:26.765   {
00:17:26.765     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:26.765     "subtype": "Discovery",
00:17:26.765     "listen_addresses": [
00:17:26.765       {
00:17:26.765         "trtype": "TCP",
00:17:26.765         "adrfam": "IPv4",
00:17:26.765         "traddr": "10.0.0.2",
00:17:26.765         "trsvcid": "4420"
00:17:26.765       }
00:17:26.765     ],
00:17:26.765     "allow_any_host": true,
00:17:26.765     "hosts": []
00:17:26.765   },
00:17:26.765   {
00:17:26.765     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:17:26.765     "subtype": "NVMe",
00:17:26.765     "listen_addresses": [
00:17:26.765       {
00:17:26.765         "trtype": "TCP",
00:17:26.765         "adrfam": "IPv4",
00:17:26.765         "traddr": "10.0.0.2",
00:17:26.765         "trsvcid": "4420"
00:17:26.765       }
00:17:26.765     ],
00:17:26.765     "allow_any_host": true,
00:17:26.765     "hosts": [],
00:17:26.765     "serial_number": "SPDK00000000000001",
00:17:26.765     "model_number": "SPDK bdev Controller",
00:17:26.765     "max_namespaces": 32,
00:17:26.765     "min_cntlid": 1,
00:17:26.765     "max_cntlid": 65519,
00:17:26.765     "namespaces": [
00:17:26.765       {
00:17:26.765         "nsid": 1,
00:17:26.765         "bdev_name": "Null1",
00:17:26.765         "name": "Null1",
00:17:26.765         "nguid": "D73745A683C54A818F824C016983DB5F",
00:17:26.765         "uuid": "d73745a6-83c5-4a81-8f82-4c016983db5f"
00:17:26.765       }
00:17:26.765     ]
00:17:26.765   },
00:17:26.765   {
00:17:26.765     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:17:26.765     "subtype": "NVMe",
00:17:26.765     "listen_addresses": [
00:17:26.765       {
00:17:26.765         "trtype": "TCP",
00:17:26.765         "adrfam": "IPv4",
00:17:26.765         "traddr": "10.0.0.2",
00:17:26.765         "trsvcid": "4420"
00:17:26.765       }
00:17:26.765     ],
00:17:26.765     "allow_any_host": true,
00:17:26.765     "hosts": [],
00:17:26.765     "serial_number": "SPDK00000000000002",
00:17:26.765     "model_number": "SPDK bdev Controller",
00:17:26.765     "max_namespaces": 32,
00:17:26.765     "min_cntlid": 1,
00:17:26.765     "max_cntlid": 65519,
00:17:26.765     "namespaces": [
00:17:26.765       {
00:17:26.765         "nsid": 1,
00:17:26.765         "bdev_name": "Null2",
00:17:26.765         "name": "Null2",
00:17:26.765         "nguid": "60C3155A3B82485DA10EE40EEE5891B0",
00:17:26.765         "uuid": "60c3155a-3b82-485d-a10e-e40eee5891b0"
00:17:26.765       }
00:17:26.765     ]
00:17:26.765   },
00:17:26.765   {
00:17:26.765     "nqn": "nqn.2016-06.io.spdk:cnode3",
00:17:26.765     "subtype": "NVMe",
00:17:26.765     "listen_addresses": [
00:17:26.765       {
00:17:26.765         "trtype": "TCP",
00:17:26.765         "adrfam": "IPv4",
00:17:26.765         "traddr": "10.0.0.2",
00:17:26.765         "trsvcid": "4420"
00:17:26.765       }
00:17:26.765     ],
00:17:26.765     "allow_any_host": true,
00:17:26.765     "hosts": [],
00:17:26.765     "serial_number": "SPDK00000000000003",
00:17:26.765     "model_number": "SPDK bdev Controller",
00:17:26.765     "max_namespaces": 32,
00:17:26.765     "min_cntlid": 1,
00:17:26.765     "max_cntlid": 65519,
00:17:26.765     "namespaces": [
00:17:26.765       {
00:17:26.765         "nsid": 1,
00:17:26.765         "bdev_name": "Null3",
00:17:26.765         "name": "Null3",
00:17:26.765         "nguid": "761E4BA2FD3B41188F4FF399302B5223",
00:17:26.765         "uuid": "761e4ba2-fd3b-4118-8f4f-f399302b5223"
00:17:26.765       }
00:17:26.765     ]
00:17:26.765   },
00:17:26.765   {
00:17:26.765     "nqn": "nqn.2016-06.io.spdk:cnode4",
00:17:26.765     "subtype": "NVMe",
00:17:26.765     "listen_addresses": [
00:17:26.765       {
00:17:26.765         "trtype": "TCP",
00:17:26.765         "adrfam": "IPv4",
00:17:26.765         "traddr": "10.0.0.2",
00:17:26.765         "trsvcid": "4420"
00:17:26.765       }
00:17:26.765     ],
00:17:26.766     "allow_any_host": true,
00:17:26.766     "hosts": [],
00:17:26.766     "serial_number": "SPDK00000000000004",
00:17:26.766     "model_number": "SPDK bdev Controller",
00:17:26.766     "max_namespaces": 32,
00:17:26.766     "min_cntlid": 1,
00:17:26.766     "max_cntlid": 65519,
00:17:26.766     "namespaces": [
00:17:26.766       {
00:17:26.766         "nsid": 1,
00:17:26.766         "bdev_name": "Null4",
00:17:26.766         "name": "Null4",
00:17:26.766         "nguid": "E7880794A0604F85ACC051DBA8C6195C",
00:17:26.766         "uuid": "e7880794-a060-4f85-acc0-51dba8c6195c"
00:17:26.766       }
00:17:26.766     ]
00:17:26.766   }
00:17:26.766 ]
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.766 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:27.022 rmmod nvme_tcp
00:17:27.022 rmmod nvme_fabrics
00:17:27.022 rmmod nvme_keyring
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2041875 ']'
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2041875
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2041875 ']'
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2041875
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2041875
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2041875'
00:17:27.022 killing process with pid 2041875
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2041875
00:17:27.022 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2041875
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:27.592 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:29.516 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:29.516
00:17:29.516 real 0m6.968s
00:17:29.516 user 0m5.899s
00:17:29.516 sys 0m2.813s
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:17:29.517 ************************************
00:17:29.517 END TEST nvmf_target_discovery
00:17:29.517 ************************************
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:29.517 ************************************
00:17:29.517 START TEST nvmf_referrals
00:17:29.517 ************************************
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:17:29.517 * Looking for test storage...
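Editor's note: the nvmf_target_discovery run that ends above boils down to a short, repeatable RPC sequence. A minimal sketch of that flow, using the same rpc.py verbs, NQNs, and addresses that appear in the trace (the rpc client path is an assumption; the test itself goes through its rpc_cmd wrapper):

    rpc=scripts/rpc.py   # assumed client path; this job drives it via rpc_cmd
    for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512                  # size/block-size arguments exactly as traced
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # the discovery service itself
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # shows up as the 6th log record
    nvme discover -t tcp -a 10.0.0.2 -s 4420                                # expect 6 records, as logged above
    for i in 1 2 3 4; do                                                    # teardown, mirroring discovery.sh@42-44
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $rpc bdev_null_delete Null$i
    done
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

The six discovery records in the log are exactly this state: the discovery subsystem itself, the four null-backed subsystems, and the referral on port 4430.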
00:17:29.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:17:29.517 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:29.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:29.779 --rc genhtml_branch_coverage=1
00:17:29.779 --rc genhtml_function_coverage=1
00:17:29.779 --rc genhtml_legend=1
00:17:29.779 --rc geninfo_all_blocks=1
00:17:29.779 --rc geninfo_unexecuted_blocks=1
00:17:29.779
00:17:29.779 '
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:29.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:29.779 --rc genhtml_branch_coverage=1
00:17:29.779 --rc genhtml_function_coverage=1
00:17:29.779 --rc genhtml_legend=1
00:17:29.779 --rc geninfo_all_blocks=1
00:17:29.779 --rc geninfo_unexecuted_blocks=1
00:17:29.779
00:17:29.779 '
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:29.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:29.779 --rc genhtml_branch_coverage=1
00:17:29.779 --rc genhtml_function_coverage=1
00:17:29.779 --rc genhtml_legend=1
00:17:29.779 --rc geninfo_all_blocks=1
00:17:29.779 --rc geninfo_unexecuted_blocks=1
00:17:29.779
00:17:29.779 '
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:29.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:29.779 --rc genhtml_branch_coverage=1
00:17:29.779 --rc genhtml_function_coverage=1
00:17:29.779 --rc genhtml_legend=1
00:17:29.779 --rc geninfo_all_blocks=1
00:17:29.779 --rc geninfo_unexecuted_blocks=1
00:17:29.779
00:17:29.779 '
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:17:29.779 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:29.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
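Editor's note: common.sh, traced just above, pins one host identity for every nvme call in the run (the remaining referral constants continue on the next lines). A minimal sketch of that pattern, with values taken from this log; the uuid-stripping step is an assumption about how NVME_HOSTID is derived from the generated NQN:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed derivation; yields cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009   # identity reused on every discover/connect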
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:17:29.780 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:17:33.076 Found 0000:84:00.0 (0x8086 - 0x159b)
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:33.076 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:17:33.077 Found 0000:84:00.1 (0x8086 - 0x159b)
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:17:33.077 Found net devices under 0000:84:00.0: cvl_0_0
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:17:33.077 Found net devices under 0000:84:00.1: cvl_0_1
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:33.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:33.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms
00:17:33.077
00:17:33.077 --- 10.0.0.2 ping statistics ---
00:17:33.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:33.077 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:33.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:33.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms
00:17:33.077
00:17:33.077 --- 10.0.0.1 ping statistics ---
00:17:33.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:33.077 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2044132
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2044132
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2044132 ']'
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:33.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
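Editor's note: nvmf_tcp_init above splits the two e810 ports across a network namespace, so target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) talk over real NICs rather than loopback. A condensed sketch of those steps plus the nvmfappstart launch, all taken from the trace; waitforlisten is the autotest helper that polls the RPC socket, and the relative nvmf_tgt path is an assumption (this job uses the workspace build):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    waitforlisten $!                                    # returns once /var/tmp/spdk.sock answers RPCs

The two pings that follow in the log are the sanity check that both directions of this split work before any NVMe traffic is attempted.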
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:33.077 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.077 [2024-12-09 10:28:17.403255] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:17:33.077 [2024-12-09 10:28:17.403433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:33.077 [2024-12-09 10:28:17.582651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:33.077 [2024-12-09 10:28:17.704479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:33.077 [2024-12-09 10:28:17.704577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:33.077 [2024-12-09 10:28:17.704613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:33.077 [2024-12-09 10:28:17.704643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:33.077 [2024-12-09 10:28:17.704670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:33.077 [2024-12-09 10:28:17.708221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:33.077 [2024-12-09 10:28:17.708325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:33.077 [2024-12-09 10:28:17.708415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:33.077 [2024-12-09 10:28:17.708419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.337 [2024-12-09 10:28:17.871308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
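Editor's note: once the discovery listener comes up on port 8009 (the NOTICE on the next line), referrals.sh registers three referrals and then checks that the RPC view and the on-the-wire discovery log agree, exactly as the following trace lines show. A sketch of that check, reusing the jq filters from the trace (the rpc client path is again an assumption):

    rpc=scripts/rpc.py
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a $ip -s 4430
    done
    rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [[ "$rpc_ips" == "$nvme_ips" ]] && echo 'referral views consistent'

The select() clause drops the discovery subsystem's own record so only the referral entries are compared; removing the referrals afterwards should bring nvmf_discovery_get_referrals back to length 0, which is what the (( 0 == 0 )) assertion at the end of this section verifies.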
00:17:33.337 [2024-12-09 10:28:17.903970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.337 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.596 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:17:33.854 10:28:18
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:33.854 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:34.112 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:34.371 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:34.371 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:34.371 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:34.371 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:34.371 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:34.371 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.629 10:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:34.629 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:34.887 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:35.145 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
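The exchange above drives three target-side RPCs (nvmf_discovery_add_referral, nvmf_discovery_get_referrals, nvmf_discovery_remove_referral) through the test's rpc_cmd wrapper, then cross-checks the host-side view with nvme discover. A minimal standalone sketch of the same flow, assuming a running nvmf_tgt with its RPC socket at the default /var/tmp/spdk.sock and a discovery listener on 10.0.0.2:8009 (the hostnqn/hostid flags from the log are omitted; nvme-cli generates defaults):

    #!/usr/bin/env bash
    # Sketch only; the rpc.py path below follows this job's workspace layout.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Point the discovery service at three other discovery controllers.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Target-side view: expect 3 entries (the jq length check at referrals.sh@48).
    "$RPC" nvmf_discovery_get_referrals | jq length

    # Host-side view: the same addresses must appear in the discovery log page,
    # filtered exactly as get_referral_ips does for the nvme case.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

    # A referral is keyed by transport/address/service when removed again.
    "$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430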
00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.404 rmmod nvme_tcp 00:17:35.404 rmmod nvme_fabrics 00:17:35.404 rmmod nvme_keyring 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2044132 ']' 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2044132 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2044132 ']' 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2044132 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:17:35.404 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.670 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044132 00:17:35.670 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.670 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.670 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044132' 00:17:35.670 killing process with pid 2044132 00:17:35.670 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2044132 00:17:35.670 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2044132 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.931 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.931 10:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:37.837 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:37.837
00:17:37.837 real 0m8.365s
00:17:37.837 user 0m12.754s
00:17:37.837 sys 0m3.170s
00:17:37.837 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:37.837 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:17:37.837 ************************************
00:17:37.837 END TEST nvmf_referrals
00:17:37.837 ************************************
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:38.097 ************************************
00:17:38.097 START TEST nvmf_connect_disconnect
00:17:38.097 ************************************
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:17:38.097 * Looking for test storage...
00:17:38.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:38.097 10:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.097 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:38.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.358 --rc genhtml_branch_coverage=1 00:17:38.358 --rc genhtml_function_coverage=1 00:17:38.358 --rc genhtml_legend=1 00:17:38.358 --rc geninfo_all_blocks=1 00:17:38.358 --rc geninfo_unexecuted_blocks=1 00:17:38.358 00:17:38.358 ' 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:38.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.358 --rc genhtml_branch_coverage=1 00:17:38.358 --rc genhtml_function_coverage=1 00:17:38.358 --rc genhtml_legend=1 00:17:38.358 --rc geninfo_all_blocks=1 00:17:38.358 --rc geninfo_unexecuted_blocks=1 00:17:38.358 00:17:38.358 ' 00:17:38.358 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:38.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.358 --rc genhtml_branch_coverage=1 00:17:38.358 --rc genhtml_function_coverage=1 00:17:38.358 --rc genhtml_legend=1 00:17:38.358 --rc geninfo_all_blocks=1 00:17:38.359 --rc geninfo_unexecuted_blocks=1 00:17:38.359 00:17:38.359 ' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:38.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.359 --rc genhtml_branch_coverage=1 00:17:38.359 --rc genhtml_function_coverage=1 00:17:38.359 --rc genhtml_legend=1 00:17:38.359 --rc geninfo_all_blocks=1 00:17:38.359 --rc geninfo_unexecuted_blocks=1 00:17:38.359 00:17:38.359 ' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.359 10:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:17:38.359 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.665 
10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.665 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:41.666 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.666 
10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:41.666 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:41.666 Found net devices under 0000:84:00.0: cvl_0_0 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
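The probe above walks the known Intel and Mellanox device IDs and, with SPDK_TEST_NVMF_NICS=e810, keeps only the two E810 ports at 0000:84:00.0/0000:84:00.1 before resolving their net devices under sysfs. A rough manual equivalent of that classification (the lspci invocation is an assumption, not part of the harness):

    # E810 parts carry Intel device IDs 0x1592/0x159b (0x37d2 would be X722).
    lspci -Dnn | grep -Ei '8086:(1592|159b)'

    # Resolve the kernel net device behind one port, as nvmf/common.sh@411 does.
    ls /sys/bus/pci/devices/0000:84:00.0/net/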
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:41.666 Found net devices under 0000:84:00.1: cvl_0_1 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:41.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:41.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms
00:17:41.666
00:17:41.666 --- 10.0.0.2 ping statistics ---
00:17:41.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:41.666 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:41.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:41.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms
00:17:41.666
00:17:41.666 --- 10.0.0.1 ping statistics ---
00:17:41.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:41.666 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2046592
00:17:41.667 10:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2046592
00:17:41.666 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2046592 ']'
00:17:41.667 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:41.667 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:41.667 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:41.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:41.667 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:41.667 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:17:41.667 [2024-12-09 10:28:25.913460] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:17:41.667 [2024-12-09 10:28:25.913570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:41.667 [2024-12-09 10:28:26.051652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:41.667 [2024-12-09 10:28:26.167970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:41.667 [2024-12-09 10:28:26.168073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:41.667 [2024-12-09 10:28:26.168110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:41.667 [2024-12-09 10:28:26.168140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:41.667 [2024-12-09 10:28:26.168166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
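nvmf_tcp_init and nvmfappstart, logged above, carry the whole phy-mode setup: one E810 port moves into a private network namespace as the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is started inside that namespace. The same bring-up, collected into one runnable sketch; every command is taken from the log except the final readiness poll, which is a rough stand-in for waitforlisten:

    # Split the two ports across a namespace: target=10.0.0.2, initiator=10.0.0.1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator side; the comment tag is what lets
    # the iptr helper strip exactly this rule again during teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the target inside the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null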
00:17:41.667 [2024-12-09 10:28:26.171673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.667 [2024-12-09 10:28:26.171810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.667 [2024-12-09 10:28:26.171848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.667 [2024-12-09 10:28:26.171852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.667 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.667 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:17:41.667 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.667 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.667 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 [2024-12-09 10:28:26.343815] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 10:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:41.926 [2024-12-09 10:28:26.428128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:41.926 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:45.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.061 rmmod nvme_tcp 00:17:56.061 rmmod nvme_fabrics 00:17:56.061 rmmod nvme_keyring 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.061 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2046592 ']' 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2046592 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2046592 ']' 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2046592 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
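
Between the listener notice and the teardown above, connect_disconnect.sh runs num_iterations=5 passes of connecting a host controller to the subsystem and disconnecting it again; each "NQN:... disconnected 1 controller(s)" line is one completed iteration. The loop body itself runs under set +x, so the host-side commands are not logged; the following is a reconstruction, with every value taken from the RPCs traced in this run (rpc.py stands for scripts/rpc.py in the SPDK tree):

    # provision: TCP transport, 64 MB malloc bdev (512 B blocks), one subsystem + listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512        # returns the bdev name, Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # one of the five connect/disconnect iterations, as seen from the initiator side
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
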
00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046592 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046592' 00:17:56.062 killing process with pid 2046592 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2046592 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2046592 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.062 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:58.603 00:17:58.603 real 0m20.177s 00:17:58.603 user 0m57.625s 00:17:58.603 sys 0m4.322s 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:58.603 ************************************ 00:17:58.603 END TEST nvmf_connect_disconnect 00:17:58.603 ************************************ 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.603 10:28:42 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.603 ************************************ 00:17:58.603 START TEST nvmf_multitarget 00:17:58.603 ************************************ 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:58.603 * Looking for test storage... 00:17:58.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.603 --rc genhtml_branch_coverage=1 00:17:58.603 --rc genhtml_function_coverage=1 00:17:58.603 --rc genhtml_legend=1 00:17:58.603 --rc geninfo_all_blocks=1 00:17:58.603 --rc geninfo_unexecuted_blocks=1 00:17:58.603 00:17:58.603 ' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.603 --rc genhtml_branch_coverage=1 00:17:58.603 --rc genhtml_function_coverage=1 00:17:58.603 --rc genhtml_legend=1 00:17:58.603 --rc geninfo_all_blocks=1 00:17:58.603 --rc geninfo_unexecuted_blocks=1 00:17:58.603 00:17:58.603 ' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.603 --rc genhtml_branch_coverage=1 00:17:58.603 --rc genhtml_function_coverage=1 00:17:58.603 --rc genhtml_legend=1 00:17:58.603 --rc geninfo_all_blocks=1 00:17:58.603 --rc geninfo_unexecuted_blocks=1 00:17:58.603 00:17:58.603 ' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.603 --rc genhtml_branch_coverage=1 00:17:58.603 --rc genhtml_function_coverage=1 00:17:58.603 --rc genhtml_legend=1 00:17:58.603 --rc geninfo_all_blocks=1 00:17:58.603 --rc geninfo_unexecuted_blocks=1 00:17:58.603 00:17:58.603 ' 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.603 10:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.603 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:58.604 10:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:58.604 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:01.147 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:01.147 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:01.147 Found net devices under 0000:84:00.0: cvl_0_0 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.147 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:01.147 Found net devices under 0000:84:00.1: cvl_0_1 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:01.148 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:01.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:18:01.406 00:18:01.406 --- 10.0.0.2 ping statistics --- 00:18:01.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.406 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:18:01.406 00:18:01.406 --- 10.0.0.1 ping statistics --- 00:18:01.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.406 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2050374 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2050374 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2050374 ']' 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.406 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:01.406 [2024-12-09 10:28:45.907924] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:18:01.406 [2024-12-09 10:28:45.908018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.406 [2024-12-09 10:28:45.996787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.722 [2024-12-09 10:28:46.062648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.722 [2024-12-09 10:28:46.062707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.722 [2024-12-09 10:28:46.062732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.722 [2024-12-09 10:28:46.062748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.723 [2024-12-09 10:28:46.062760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.723 [2024-12-09 10:28:46.064545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.723 [2024-12-09 10:28:46.064597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.723 [2024-12-09 10:28:46.064621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.723 [2024-12-09 10:28:46.064629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:01.723 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:01.979 "nvmf_tgt_1" 00:18:01.979 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:02.237 "nvmf_tgt_2" 00:18:02.237 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:18:02.237 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:02.494 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:02.494 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:02.751 true 00:18:02.751 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:02.751 true 00:18:02.751 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:02.751 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:03.008 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.009 rmmod nvme_tcp 00:18:03.009 rmmod nvme_fabrics 00:18:03.009 rmmod nvme_keyring 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2050374 ']' 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2050374 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2050374 ']' 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2050374 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2050374 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.009 10:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2050374' 00:18:03.009 killing process with pid 2050374 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2050374 00:18:03.009 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2050374 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.268 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.810 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:05.810 00:18:05.810 real 0m7.164s 00:18:05.810 user 0m9.098s 00:18:05.810 sys 0m2.739s 00:18:05.810 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.810 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:05.810 ************************************ 00:18:05.810 END TEST nvmf_multitarget 00:18:05.810 ************************************ 00:18:05.810 10:28:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:05.810 10:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:05.810 10:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.810 10:28:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.810 ************************************ 00:18:05.810 START TEST nvmf_rpc 00:18:05.810 ************************************ 00:18:05.810 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:05.810 * Looking for test storage... 
00:18:05.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.810 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:05.810 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:05.810 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.811 --rc genhtml_branch_coverage=1 00:18:05.811 --rc genhtml_function_coverage=1 00:18:05.811 --rc genhtml_legend=1 00:18:05.811 --rc geninfo_all_blocks=1 00:18:05.811 --rc geninfo_unexecuted_blocks=1 00:18:05.811 00:18:05.811 ' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.811 --rc genhtml_branch_coverage=1 00:18:05.811 --rc genhtml_function_coverage=1 00:18:05.811 --rc genhtml_legend=1 00:18:05.811 --rc geninfo_all_blocks=1 00:18:05.811 --rc geninfo_unexecuted_blocks=1 00:18:05.811 00:18:05.811 ' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.811 --rc genhtml_branch_coverage=1 00:18:05.811 --rc genhtml_function_coverage=1 00:18:05.811 --rc genhtml_legend=1 00:18:05.811 --rc geninfo_all_blocks=1 00:18:05.811 --rc geninfo_unexecuted_blocks=1 00:18:05.811 00:18:05.811 ' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:05.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.811 --rc genhtml_branch_coverage=1 00:18:05.811 --rc genhtml_function_coverage=1 00:18:05.811 --rc genhtml_legend=1 00:18:05.811 --rc geninfo_all_blocks=1 00:18:05.811 --rc geninfo_unexecuted_blocks=1 00:18:05.811 00:18:05.811 ' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
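
This preamble repeats for every test: run_test starts each test script in a fresh shell, so rpc.sh re-sources nvmf/common.sh here exactly as multitarget.sh did above, redoing the OS check just logged and the port defaults (4420/4421/4422) and host identity that follow. The host NQN is regenerated each time with the stock nvme-cli helper; it comes out identical in both runs because on this machine it is derived from a stable system UUID rather than a random one:

    # common.sh derives NVME_HOSTNQN (and NVME_HOSTID) from nvme-cli
    nvme gen-hostnqn
    # -> nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
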
00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:05.811 10:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:05.811 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.101 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:09.102 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:09.102 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:09.102 Found net devices under 0000:84:00.0: cvl_0_0 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:09.102 Found net devices under 0000:84:00.1: cvl_0_1 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:09.102 10:28:53 
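The discovery pass above reduces to one sysfs glob per PCI function: common.sh lists /sys/bus/pci/devices/<addr>/net/ to learn which kernel netdevs back each NIC port, then strips the directory prefix. A standalone sketch using the address from this run (it only reports something on a machine that actually has this device):

    pci=0000:84:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per kernel netdev on this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"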
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.102 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:09.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:18:09.103 00:18:09.103 --- 10.0.0.2 ping statistics --- 00:18:09.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.103 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:18:09.103 00:18:09.103 --- 10.0.0.1 ping statistics --- 00:18:09.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.103 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2052739 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2052739 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2052739 ']' 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.103 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.103 [2024-12-09 10:28:53.399773] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
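Everything from nvmf_tcp_init through the two pings amounts to splitting the pair of ice ports across a network namespace so target and initiator traffic crosses real hardware: cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace, with one plausible shape for the ipts wrapper (inferred from the expanded iptables command above; the comment tag lets teardown flush only SPDK's rules):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

This is also why nvmf_tgt is started under "ip netns exec cvl_0_0_ns_spdk" above: the TCP listener has to live in the namespace that owns cvl_0_0.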
00:18:09.103 [2024-12-09 10:28:53.399952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.103 [2024-12-09 10:28:53.586280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.103 [2024-12-09 10:28:53.706591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.103 [2024-12-09 10:28:53.706738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.103 [2024-12-09 10:28:53.706801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.103 [2024-12-09 10:28:53.706851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.103 [2024-12-09 10:28:53.706895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.103 [2024-12-09 10:28:53.710612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.103 [2024-12-09 10:28:53.710803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.103 [2024-12-09 10:28:53.710704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.103 [2024-12-09 10:28:53.710807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:09.361 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.361 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.361 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.361 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:09.361 "tick_rate": 2700000000, 00:18:09.361 "poll_groups": [ 00:18:09.361 { 00:18:09.361 "name": "nvmf_tgt_poll_group_000", 00:18:09.361 "admin_qpairs": 0, 00:18:09.361 "io_qpairs": 0, 00:18:09.361 "current_admin_qpairs": 0, 00:18:09.361 "current_io_qpairs": 0, 00:18:09.361 "pending_bdev_io": 0, 00:18:09.361 "completed_nvme_io": 0, 00:18:09.361 "transports": [] 00:18:09.361 }, 00:18:09.361 { 00:18:09.361 "name": "nvmf_tgt_poll_group_001", 00:18:09.361 "admin_qpairs": 0, 00:18:09.361 "io_qpairs": 0, 00:18:09.361 "current_admin_qpairs": 0, 00:18:09.361 "current_io_qpairs": 0, 00:18:09.361 "pending_bdev_io": 0, 00:18:09.361 "completed_nvme_io": 0, 00:18:09.361 "transports": [] 00:18:09.361 }, 00:18:09.361 { 00:18:09.361 "name": "nvmf_tgt_poll_group_002", 00:18:09.361 "admin_qpairs": 0, 00:18:09.361 "io_qpairs": 0, 00:18:09.361 
"current_admin_qpairs": 0, 00:18:09.361 "current_io_qpairs": 0, 00:18:09.361 "pending_bdev_io": 0, 00:18:09.361 "completed_nvme_io": 0, 00:18:09.361 "transports": [] 00:18:09.361 }, 00:18:09.361 { 00:18:09.361 "name": "nvmf_tgt_poll_group_003", 00:18:09.361 "admin_qpairs": 0, 00:18:09.361 "io_qpairs": 0, 00:18:09.361 "current_admin_qpairs": 0, 00:18:09.361 "current_io_qpairs": 0, 00:18:09.361 "pending_bdev_io": 0, 00:18:09.361 "completed_nvme_io": 0, 00:18:09.361 "transports": [] 00:18:09.361 } 00:18:09.361 ] 00:18:09.361 }' 00:18:09.361 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:09.361 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.618 [2024-12-09 10:28:54.161155] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:09.618 "tick_rate": 2700000000, 00:18:09.618 "poll_groups": [ 00:18:09.618 { 00:18:09.618 "name": "nvmf_tgt_poll_group_000", 00:18:09.618 "admin_qpairs": 0, 00:18:09.618 "io_qpairs": 0, 00:18:09.618 "current_admin_qpairs": 0, 00:18:09.618 "current_io_qpairs": 0, 00:18:09.618 "pending_bdev_io": 0, 00:18:09.618 "completed_nvme_io": 0, 00:18:09.618 "transports": [ 00:18:09.618 { 00:18:09.618 "trtype": "TCP" 00:18:09.618 } 00:18:09.618 ] 00:18:09.618 }, 00:18:09.618 { 00:18:09.618 "name": "nvmf_tgt_poll_group_001", 00:18:09.618 "admin_qpairs": 0, 00:18:09.618 "io_qpairs": 0, 00:18:09.618 "current_admin_qpairs": 0, 00:18:09.618 "current_io_qpairs": 0, 00:18:09.618 "pending_bdev_io": 0, 00:18:09.618 "completed_nvme_io": 0, 00:18:09.618 "transports": [ 00:18:09.618 { 00:18:09.618 "trtype": "TCP" 00:18:09.618 } 00:18:09.618 ] 00:18:09.618 }, 00:18:09.618 { 00:18:09.618 "name": "nvmf_tgt_poll_group_002", 00:18:09.618 "admin_qpairs": 0, 00:18:09.618 "io_qpairs": 0, 00:18:09.618 "current_admin_qpairs": 0, 00:18:09.618 "current_io_qpairs": 0, 00:18:09.618 "pending_bdev_io": 0, 00:18:09.618 "completed_nvme_io": 0, 00:18:09.618 "transports": [ 00:18:09.618 { 00:18:09.618 "trtype": "TCP" 
00:18:09.618 } 00:18:09.618 ] 00:18:09.618 }, 00:18:09.618 { 00:18:09.618 "name": "nvmf_tgt_poll_group_003", 00:18:09.618 "admin_qpairs": 0, 00:18:09.618 "io_qpairs": 0, 00:18:09.618 "current_admin_qpairs": 0, 00:18:09.618 "current_io_qpairs": 0, 00:18:09.618 "pending_bdev_io": 0, 00:18:09.618 "completed_nvme_io": 0, 00:18:09.618 "transports": [ 00:18:09.618 { 00:18:09.618 "trtype": "TCP" 00:18:09.618 } 00:18:09.618 ] 00:18:09.618 } 00:18:09.618 ] 00:18:09.618 }' 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:09.618 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:09.619 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:09.619 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:09.619 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:09.619 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:09.619 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:09.619 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:09.619 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.877 Malloc1 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
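The jcount/jsum assertions in this stretch are two small helpers from target/rpc.sh that run jq filters over the saved nvmf_get_stats output; reconstructed from the traced filters (feeding $stats through a herestring is an assumption, the jq and awk pipelines are verbatim):

    jcount() { jq "$1" <<< "$stats" | wc -l; }                      # count matching nodes
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1}END{print s}'; }  # sum numeric matches
    (( $(jcount '.poll_groups[].name') == 4 ))           # one poll group per core of -m 0xF
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))        # nothing connected yet

The before/after pair of nvmf_get_stats calls is the actual check: "transports" stays empty until nvmf_create_transport -t tcp -o -u 8192 runs, after which every poll group reports a TCP transport entry.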
common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.877 [2024-12-09 10:28:54.354457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:18:09.877 [2024-12-09 10:28:54.377127] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:18:09.877 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:09.877 could not add new controller: failed to write to nvme-fabrics device 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:09.877 10:28:54 
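The failed connect above is the point of the NOT wrapper: with nvmf_subsystem_allow_any_host -d in force and no host NQN registered, the write to /dev/nvme-fabrics must come back with an I/O error, and the target side logs the rejection from nvmf_qpair_access_allowed. The same negative check in isolation, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (NQNs, address, and port copied from the log):

    ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
        && echo "unexpected: connect succeeded" \
        || echo "rejected, as the test requires"

The next step flips the result: once nvmf_subsystem_add_host registers this host NQN, the identical connect succeeds.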
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.877 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:09.878 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.878 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.878 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.878 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:10.442 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:10.442 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:10.442 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.442 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:10.442 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.981 [2024-12-09 10:28:57.196177] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:18:12.981 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:12.981 could not add new controller: failed to write to nvme-fabrics device 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.981 
10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.981 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:13.546 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:13.546 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:13.546 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.546 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:13.546 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:15.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:15.475 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:15.475 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:15.476 
10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.476 [2024-12-09 10:29:00.061205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.476 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.041 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:16.041 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:16.041 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.042 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:16.042 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 [2024-12-09 10:29:02.811226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.571 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:19.136 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:19.137 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:19.137 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.137 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:19.137 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:21.032 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.033 [2024-12-09 10:29:05.640407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.033 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:21.963 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.964 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:21.964 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.964 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:21.964 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:23.881 
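The trace is parked here inside waitforserial's poll: the connect has been issued and the harness sleeps two seconds between lsblk scans until a block device carrying the subsystem serial shows up. Reconstructed from the traced line numbers (the real helper lives in autotest_common.sh; loop placement of the retry sleep is assumed, since only one pass is traced per call):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        sleep 2                           # traced before the first loop check
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2                       # assumed between retries
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME    # returns once the namespace appears as a block device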
10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.881 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 [2024-12-09 10:29:08.530285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:24.706 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:24.706 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:24.706 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.706 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:24.706 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:26.604 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:26.604 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:26.604 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.604 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:26.604 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.604 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:26.604 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:26.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
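The host side of each pass is plain nvme-cli, as the rpc.sh@86-91 records show. A sketch of the connect/disconnect pair, assuming nvme-cli is installed; the trace pins --hostnqn/--hostid to this rig's fixed UUID, but a generated NQN works the same way:

    hostnqn=$(nvme gen-hostnqn)                       # trace uses a fixed uuid-based NQN
    nvme connect --hostnqn="$hostnqn" -t tcp \
         -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME                # helper sketched earlier
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # logs "disconnected 1 controller(s)"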
00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.862 [2024-12-09 10:29:11.355460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.862 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:27.427 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:27.427 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:27.427 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.427 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:27.427 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:29.952 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:29.952 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:29.952 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:29.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:29.952 
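The block that starts here (rpc.sh@99-107) switches mode: five create/teardown cycles with no host connection at all, churning a namespace through auto-assigned NSIDs. Condensed into a loop, reusing the rpc/nqn variables from the first sketch:

    # The seq 1 5 passes of rpc.sh@99-107: same lifecycle, no host I/O.
    loops=5
    for i in $(seq 1 "$loops"); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1     # no -n: target assigns NSID 1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done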
10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 [2024-12-09 10:29:14.157175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.952 [2024-12-09 10:29:14.205219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.952 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 
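Every rpc_cmd call in this trace brackets the actual RPC with an xtrace_disable/set +x pair (the @563/@10 records) and a "[[ 0 == 0 ]]" status check (@591). A rough reconstruction of that wrapper, assuming rpc.py as the transport; the real helper in autotest_common.sh keeps a persistent RPC session and more bookkeeping:

    rpc_cmd() {
        set +x                       # xtrace_disable: keep the log readable
        local rc=0
        scripts/rpc.py "$@" || rc=$?
        set -x
        [[ $rc == 0 ]]               # surfaces as the "[[ 0 == 0 ]]" checks above
    }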
10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 [2024-12-09 10:29:14.253361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 [2024-12-09 10:29:14.301542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 [2024-12-09 10:29:14.349698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.953 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:29.953 "tick_rate": 2700000000, 00:18:29.953 "poll_groups": [ 00:18:29.953 { 00:18:29.953 "name": "nvmf_tgt_poll_group_000", 00:18:29.953 "admin_qpairs": 2, 00:18:29.953 "io_qpairs": 84, 00:18:29.953 "current_admin_qpairs": 0, 00:18:29.953 "current_io_qpairs": 0, 00:18:29.953 "pending_bdev_io": 0, 00:18:29.953 "completed_nvme_io": 172, 00:18:29.953 "transports": [ 00:18:29.953 { 00:18:29.953 "trtype": "TCP" 00:18:29.953 } 00:18:29.953 ] 00:18:29.953 }, 00:18:29.953 { 00:18:29.953 "name": "nvmf_tgt_poll_group_001", 00:18:29.953 "admin_qpairs": 2, 00:18:29.953 "io_qpairs": 84, 00:18:29.953 "current_admin_qpairs": 0, 00:18:29.953 "current_io_qpairs": 0, 00:18:29.953 "pending_bdev_io": 0, 00:18:29.953 "completed_nvme_io": 96, 00:18:29.953 "transports": [ 00:18:29.953 { 00:18:29.953 "trtype": "TCP" 00:18:29.953 } 00:18:29.953 ] 00:18:29.953 }, 00:18:29.953 { 00:18:29.953 "name": "nvmf_tgt_poll_group_002", 00:18:29.953 "admin_qpairs": 1, 00:18:29.954 "io_qpairs": 84, 00:18:29.954 "current_admin_qpairs": 0, 00:18:29.954 "current_io_qpairs": 0, 00:18:29.954 "pending_bdev_io": 0, 00:18:29.954 "completed_nvme_io": 283, 00:18:29.954 "transports": [ 00:18:29.954 { 00:18:29.954 "trtype": "TCP" 00:18:29.954 } 00:18:29.954 ] 00:18:29.954 }, 00:18:29.954 { 00:18:29.954 "name": "nvmf_tgt_poll_group_003", 00:18:29.954 "admin_qpairs": 2, 00:18:29.954 "io_qpairs": 84, 00:18:29.954 "current_admin_qpairs": 0, 00:18:29.954 "current_io_qpairs": 0, 00:18:29.954 "pending_bdev_io": 0, 00:18:29.954 "completed_nvme_io": 135, 00:18:29.954 "transports": [ 00:18:29.954 { 00:18:29.954 "trtype": "TCP" 00:18:29.954 } 00:18:29.954 ] 00:18:29.954 } 00:18:29.954 ] 00:18:29.954 }' 00:18:29.954 10:29:14 
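The jsum calls that follow reduce the nvmf_get_stats document just captured to per-field totals: the four poll groups report 2+2+1+2 = 7 admin qpairs and 4 x 84 = 336 I/O qpairs, which is exactly what the "(( 7 > 0 ))" and "(( 336 > 0 ))" assertions check. The helper at rpc.sh@19-20 is jq piped into an awk accumulator; the harness filters the captured $stats string, but piping the RPC directly is equivalent:

    # Sum a numeric jq filter over the target's stats.
    jsum() {
        local filter=$1
        $rpc nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 84  = 336 in this run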
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:29.954 rmmod nvme_tcp 00:18:29.954 rmmod nvme_fabrics 00:18:29.954 rmmod nvme_keyring 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2052739 ']' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2052739 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2052739 ']' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2052739 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.954 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2052739 00:18:30.211 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.211 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.211 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2052739' 00:18:30.211 killing process with pid 2052739 00:18:30.211 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2052739 00:18:30.211 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2052739 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.471 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.380 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:32.380 00:18:32.380 real 0m27.021s 00:18:32.380 user 1m25.016s 00:18:32.380 sys 0m5.110s 00:18:32.380 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.380 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.380 ************************************ 00:18:32.380 END TEST nvmf_rpc 00:18:32.380 ************************************ 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:32.641 ************************************ 00:18:32.641 START TEST nvmf_invalid 00:18:32.641 ************************************ 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:32.641 * Looking for test storage... 
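Before nvmf_invalid begins, the nvmftestfini records above tear the previous target down: unload the kernel initiator modules, kill the nvmf_tgt pid, strip the SPDK-tagged firewall rules, and flush the initiator interface. The gist, under the same conventions; "wait" works only because nvmf_tgt is a child of the harness shell, and cvl_0_1 is the initiator NIC name this rig assigned:

    teardown() {
        local pid=$1
        modprobe -v -r nvme-tcp        # rmmods nvme_tcp/nvme_fabrics/nvme_keyring
        modprobe -v -r nvme-fabrics
        kill -0 "$pid" 2>/dev/null && kill "$pid" && wait "$pid"   # killprocess
        iptables-save | grep -v SPDK_NVMF | iptables-restore       # drop only tagged rules
        ip -4 addr flush cvl_0_1
    }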
00:18:32.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:32.641 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:32.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.902 --rc genhtml_branch_coverage=1 00:18:32.902 --rc genhtml_function_coverage=1 00:18:32.902 --rc genhtml_legend=1 00:18:32.902 --rc geninfo_all_blocks=1 00:18:32.902 --rc geninfo_unexecuted_blocks=1 00:18:32.902 00:18:32.902 ' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:32.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.902 --rc genhtml_branch_coverage=1 00:18:32.902 --rc genhtml_function_coverage=1 00:18:32.902 --rc genhtml_legend=1 00:18:32.902 --rc geninfo_all_blocks=1 00:18:32.902 --rc geninfo_unexecuted_blocks=1 00:18:32.902 00:18:32.902 ' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:32.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.902 --rc genhtml_branch_coverage=1 00:18:32.902 --rc genhtml_function_coverage=1 00:18:32.902 --rc genhtml_legend=1 00:18:32.902 --rc geninfo_all_blocks=1 00:18:32.902 --rc geninfo_unexecuted_blocks=1 00:18:32.902 00:18:32.902 ' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:32.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.902 --rc genhtml_branch_coverage=1 00:18:32.902 --rc genhtml_function_coverage=1 00:18:32.902 --rc genhtml_legend=1 00:18:32.902 --rc geninfo_all_blocks=1 00:18:32.902 --rc geninfo_unexecuted_blocks=1 00:18:32.902 00:18:32.902 ' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:32.902 10:29:17 
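The lt/cmp_versions dance above decides whether the installed lcov (1.15 here) predates 2.x before choosing coverage flags: each version string is split on ".", "-", ":" and the fields compared numerically. A trimmed rendering of scripts/common.sh@333-368:

    # Return 0 if version $1 is strictly older than $2, field by field.
    version_lt() {
        local -a v1 v2; local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # versions are equal
    }

    version_lt 1.15 2   # true, so the 1.x-style --rc lcov_branch_coverage flags are kept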
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.902 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:32.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:32.903 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.200 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:36.201 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:36.201 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:36.201 Found net devices under 0000:84:00.0: cvl_0_0 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:36.201 Found net devices under 0000:84:00.1: cvl_0_1 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:36.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:18:36.201 00:18:36.201 --- 10.0.0.2 ping statistics --- 00:18:36.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.201 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:18:36.201 00:18:36.201 --- 10.0.0.1 ping statistics --- 00:18:36.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.201 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2057377 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2057377 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2057377 ']' 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.201 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:36.201 [2024-12-09 10:29:20.626492] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
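The setup traced above is the standard nvmf TCP self-test topology: the harness matched both ports of an Intel E810 NIC (0x8086:0x159b, ice driver) as cvl_0_0 and cvl_0_1, then moved one port into a private network namespace so a single host can act as both NVMe/TCP target and initiator over real hardware. Condensed into a sketch (interface names, addresses and flags exactly as logged; the bookkeeping in nvmf/common.sh omitted):

    NS=cvl_0_0_ns_spdk                       # namespace that will own the target-side port
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                       # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace

With both pings answering, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten polls /var/tmp/spdk.sock; the SPDK/DPDK startup banner around this point is that process coming up.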
00:18:36.201 [2024-12-09 10:29:20.626588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.201 [2024-12-09 10:29:20.767094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.460 [2024-12-09 10:29:20.886974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.460 [2024-12-09 10:29:20.887066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.460 [2024-12-09 10:29:20.887139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.460 [2024-12-09 10:29:20.887189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.460 [2024-12-09 10:29:20.887233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.460 [2024-12-09 10:29:20.890764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.460 [2024-12-09 10:29:20.890920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.460 [2024-12-09 10:29:20.890830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.460 [2024-12-09 10:29:20.890924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:37.833 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22852 00:18:38.090 [2024-12-09 10:29:22.516945] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:38.090 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:38.090 { 00:18:38.090 "nqn": "nqn.2016-06.io.spdk:cnode22852", 00:18:38.090 "tgt_name": "foobar", 00:18:38.090 "method": "nvmf_create_subsystem", 00:18:38.090 "req_id": 1 00:18:38.090 } 00:18:38.090 Got JSON-RPC error response 00:18:38.090 response: 00:18:38.090 { 00:18:38.090 "code": -32603, 00:18:38.090 "message": "Unable to find target foobar" 00:18:38.090 }' 00:18:38.090 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:38.090 { 00:18:38.090 "nqn": "nqn.2016-06.io.spdk:cnode22852", 00:18:38.090 "tgt_name": "foobar", 00:18:38.090 "method": "nvmf_create_subsystem", 00:18:38.090 "req_id": 1 00:18:38.090 } 00:18:38.090 Got JSON-RPC error response 00:18:38.090 
response: 00:18:38.090 { 00:18:38.090 "code": -32603, 00:18:38.090 "message": "Unable to find target foobar" 00:18:38.090 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:38.090 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:38.090 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6964 00:18:38.347 [2024-12-09 10:29:22.898235] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6964: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:38.347 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:38.347 { 00:18:38.347 "nqn": "nqn.2016-06.io.spdk:cnode6964", 00:18:38.347 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:38.347 "method": "nvmf_create_subsystem", 00:18:38.347 "req_id": 1 00:18:38.348 } 00:18:38.348 Got JSON-RPC error response 00:18:38.348 response: 00:18:38.348 { 00:18:38.348 "code": -32602, 00:18:38.348 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:38.348 }' 00:18:38.348 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:38.348 { 00:18:38.348 "nqn": "nqn.2016-06.io.spdk:cnode6964", 00:18:38.348 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:38.348 "method": "nvmf_create_subsystem", 00:18:38.348 "req_id": 1 00:18:38.348 } 00:18:38.348 Got JSON-RPC error response 00:18:38.348 response: 00:18:38.348 { 00:18:38.348 "code": -32602, 00:18:38.348 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:38.348 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:38.348 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:38.348 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8655 00:18:38.914 [2024-12-09 10:29:23.419888] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8655: invalid model number 'SPDK_Controller' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:38.914 { 00:18:38.914 "nqn": "nqn.2016-06.io.spdk:cnode8655", 00:18:38.914 "model_number": "SPDK_Controller\u001f", 00:18:38.914 "method": "nvmf_create_subsystem", 00:18:38.914 "req_id": 1 00:18:38.914 } 00:18:38.914 Got JSON-RPC error response 00:18:38.914 response: 00:18:38.914 { 00:18:38.914 "code": -32602, 00:18:38.914 "message": "Invalid MN SPDK_Controller\u001f" 00:18:38.914 }' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:38.914 { 00:18:38.914 "nqn": "nqn.2016-06.io.spdk:cnode8655", 00:18:38.914 "model_number": "SPDK_Controller\u001f", 00:18:38.914 "method": "nvmf_create_subsystem", 00:18:38.914 "req_id": 1 00:18:38.914 } 00:18:38.914 Got JSON-RPC error response 00:18:38.914 response: 00:18:38.914 { 00:18:38.914 "code": -32602, 00:18:38.914 "message": "Invalid MN SPDK_Controller\u001f" 00:18:38.914 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:38.914 10:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
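The long printf/echo/string+= run here is target/invalid.sh's gen_random_s helper building a 21-character random string one character at a time: pick a code point from the chars array (ASCII 32 through 127), print it as hex with printf %x, render it with echo -e '\xNN', and append. The same helper runs again below with length 41 for the model-number test, after the fixed-string probes above in which a trailing 0x1f control byte ($'...\037') made an otherwise plausible serial and model number invalid. A compact equivalent of the generator, assuming nothing beyond what the trace shows (command substitution drops a trailing space, so this is a sketch, not a byte-exact replica):

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))        # same code-point pool as the chars=() array above
        for (( ll = 0; ll < length; ll++ )); do
            local hex
            hex=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")   # e.g. 66 -> 42
            string+=$(echo -e "\x$hex")                            # e.g. \x42 -> B
        done
        # the real helper also special-cases a leading '-' (the [[ ... == \- ]] check at invalid.sh@28)
        echo "$string"
    }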
00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:38.914 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6a' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'B_,n0g|A;?L0W$(4jW3ON' 00:18:38.915 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'B_,n0g|A;?L0W$(4jW3ON' nqn.2016-06.io.spdk:cnode4020 00:18:39.480 [2024-12-09 10:29:23.953673] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4020: invalid serial number 'B_,n0g|A;?L0W$(4jW3ON' 00:18:39.480 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:39.480 { 00:18:39.480 "nqn": "nqn.2016-06.io.spdk:cnode4020", 00:18:39.480 "serial_number": "B_,n0g|A;?L0W$(4jW3ON", 00:18:39.480 "method": "nvmf_create_subsystem", 00:18:39.480 "req_id": 1 00:18:39.480 } 00:18:39.480 Got JSON-RPC error response 00:18:39.480 response: 00:18:39.480 { 00:18:39.480 "code": -32602, 00:18:39.480 "message": "Invalid SN B_,n0g|A;?L0W$(4jW3ON" 00:18:39.480 }' 
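The lengths are deliberate: the NVMe Identify Controller data fixes the serial number at 20 ASCII bytes and the model number at 40, so the 21- and 41-character strings generated here are each one byte over the limit. Every negative test in this file then takes the same shape as the exchange above and the check that follows: capture the JSON-RPC error, assert the expected message. Roughly (rpc.py path shortened, error handling elided):

    out=$(rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)" \
              nqn.2016-06.io.spdk:cnode4020 2>&1) || true
    [[ $out == *"Invalid SN"* ]]    # expect code -32602, message "Invalid SN <string>"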
00:18:39.480 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:39.480 { 00:18:39.480 "nqn": "nqn.2016-06.io.spdk:cnode4020", 00:18:39.480 "serial_number": "B_,n0g|A;?L0W$(4jW3ON", 00:18:39.480 "method": "nvmf_create_subsystem", 00:18:39.480 "req_id": 1 00:18:39.480 } 00:18:39.481 Got JSON-RPC error response 00:18:39.481 response: 00:18:39.481 { 00:18:39.481 "code": -32602, 00:18:39.481 "message": "Invalid SN B_,n0g|A;?L0W$(4jW3ON" 00:18:39.481 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 
00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 
00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
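A readability note on the trace itself: bash's xtrace single-quotes any appended character the shell would otherwise interpret ('?', '&', '(', and the backslash appended a few steps further down), while plain letters and digits appear bare. When the finished string is later embedded in a JSON-RPC request, the JSON encoder escapes that backslash a second time, which is why the model number in the error below shows a doubled backslash (...nmU\\b8Q>...). The same double-escaping can be reproduced directly:

    python3 -c 'import json; print(json.dumps("nmU\\b8Q>"))'   # prints "nmU\\b8Q>"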
00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:39.481 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 
00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 
00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:39.482 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bEciEp,E8]l=+&f?zY9C32?8nmU\b8Q>U+)*^<]qY' 00:18:39.741 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'bEciEp,E8]l=+&f?zY9C32?8nmU\b8Q>U+)*^<]qY' nqn.2016-06.io.spdk:cnode1446 00:18:39.998 [2024-12-09 10:29:24.499427] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1446: invalid model number 'bEciEp,E8]l=+&f?zY9C32?8nmU\b8Q>U+)*^<]qY' 00:18:39.998 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:39.998 { 00:18:39.998 "nqn": "nqn.2016-06.io.spdk:cnode1446", 00:18:39.998 "model_number": 
"bEciEp,E8]l=+&f?zY9C32?8nmU\\b8Q>U+)*^<]qY", 00:18:39.998 "method": "nvmf_create_subsystem", 00:18:39.998 "req_id": 1 00:18:39.998 } 00:18:39.998 Got JSON-RPC error response 00:18:39.998 response: 00:18:39.998 { 00:18:39.998 "code": -32602, 00:18:39.998 "message": "Invalid MN bEciEp,E8]l=+&f?zY9C32?8nmU\\b8Q>U+)*^<]qY" 00:18:39.998 }' 00:18:39.998 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:39.998 { 00:18:39.998 "nqn": "nqn.2016-06.io.spdk:cnode1446", 00:18:39.998 "model_number": "bEciEp,E8]l=+&f?zY9C32?8nmU\\b8Q>U+)*^<]qY", 00:18:39.998 "method": "nvmf_create_subsystem", 00:18:39.998 "req_id": 1 00:18:39.998 } 00:18:39.998 Got JSON-RPC error response 00:18:39.998 response: 00:18:39.998 { 00:18:39.998 "code": -32602, 00:18:39.998 "message": "Invalid MN bEciEp,E8]l=+&f?zY9C32?8nmU\\b8Q>U+)*^<]qY" 00:18:39.998 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:39.998 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:40.256 [2024-12-09 10:29:24.828576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.256 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:40.820 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:40.820 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:40.820 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:40.820 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:40.820 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:41.384 [2024-12-09 10:29:25.888049] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:41.384 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:41.384 { 00:18:41.384 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:41.384 "listen_address": { 00:18:41.384 "trtype": "tcp", 00:18:41.384 "traddr": "", 00:18:41.384 "trsvcid": "4421" 00:18:41.384 }, 00:18:41.384 "method": "nvmf_subsystem_remove_listener", 00:18:41.384 "req_id": 1 00:18:41.384 } 00:18:41.384 Got JSON-RPC error response 00:18:41.384 response: 00:18:41.384 { 00:18:41.384 "code": -32602, 00:18:41.384 "message": "Invalid parameters" 00:18:41.384 }' 00:18:41.384 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:41.384 { 00:18:41.384 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:41.384 "listen_address": { 00:18:41.384 "trtype": "tcp", 00:18:41.384 "traddr": "", 00:18:41.384 "trsvcid": "4421" 00:18:41.384 }, 00:18:41.384 "method": "nvmf_subsystem_remove_listener", 00:18:41.384 "req_id": 1 00:18:41.384 } 00:18:41.384 Got JSON-RPC error response 00:18:41.384 response: 00:18:41.384 { 00:18:41.384 "code": -32602, 00:18:41.384 "message": "Invalid parameters" 00:18:41.384 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:41.384 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27573 -i 0 00:18:41.641 [2024-12-09 10:29:26.192968] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27573: invalid cntlid range [0-65519] 00:18:41.641 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:41.641 { 00:18:41.641 "nqn": "nqn.2016-06.io.spdk:cnode27573", 00:18:41.641 "min_cntlid": 0, 00:18:41.641 "method": "nvmf_create_subsystem", 00:18:41.641 "req_id": 1 00:18:41.641 } 00:18:41.641 Got JSON-RPC error response 00:18:41.641 response: 00:18:41.641 { 00:18:41.641 "code": -32602, 00:18:41.641 "message": "Invalid cntlid range [0-65519]" 00:18:41.641 }' 00:18:41.641 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:41.641 { 00:18:41.641 "nqn": "nqn.2016-06.io.spdk:cnode27573", 00:18:41.641 "min_cntlid": 0, 00:18:41.641 "method": "nvmf_create_subsystem", 00:18:41.641 "req_id": 1 00:18:41.641 } 00:18:41.641 Got JSON-RPC error response 00:18:41.641 response: 00:18:41.641 { 00:18:41.641 "code": -32602, 00:18:41.641 "message": "Invalid cntlid range [0-65519]" 00:18:41.641 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:41.641 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16299 -i 65520 00:18:41.899 [2024-12-09 10:29:26.534102] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16299: invalid cntlid range [65520-65519] 00:18:42.156 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:42.156 { 00:18:42.156 "nqn": "nqn.2016-06.io.spdk:cnode16299", 00:18:42.156 "min_cntlid": 65520, 00:18:42.156 "method": "nvmf_create_subsystem", 00:18:42.156 "req_id": 1 00:18:42.156 } 00:18:42.156 Got JSON-RPC error response 00:18:42.156 response: 00:18:42.156 { 00:18:42.156 "code": -32602, 00:18:42.156 "message": "Invalid cntlid range [65520-65519]" 00:18:42.156 }' 00:18:42.156 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:42.156 { 00:18:42.156 "nqn": "nqn.2016-06.io.spdk:cnode16299", 00:18:42.156 "min_cntlid": 65520, 00:18:42.156 "method": "nvmf_create_subsystem", 00:18:42.156 "req_id": 1 00:18:42.156 } 00:18:42.156 Got JSON-RPC error response 00:18:42.156 response: 00:18:42.156 { 00:18:42.156 "code": -32602, 00:18:42.156 "message": "Invalid cntlid range [65520-65519]" 00:18:42.156 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:42.157 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18995 -I 0 00:18:42.414 [2024-12-09 10:29:26.907342] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18995: invalid cntlid range [1-0] 00:18:42.414 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:42.414 { 00:18:42.414 "nqn": "nqn.2016-06.io.spdk:cnode18995", 00:18:42.414 "max_cntlid": 0, 00:18:42.414 "method": "nvmf_create_subsystem", 00:18:42.414 "req_id": 1 00:18:42.414 } 00:18:42.414 Got JSON-RPC error response 00:18:42.414 response: 00:18:42.414 { 00:18:42.414 "code": -32602, 00:18:42.414 "message": "Invalid cntlid range [1-0]" 00:18:42.414 }' 00:18:42.414 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 
request: 00:18:42.414 { 00:18:42.414 "nqn": "nqn.2016-06.io.spdk:cnode18995", 00:18:42.414 "max_cntlid": 0, 00:18:42.414 "method": "nvmf_create_subsystem", 00:18:42.414 "req_id": 1 00:18:42.414 } 00:18:42.414 Got JSON-RPC error response 00:18:42.414 response: 00:18:42.414 { 00:18:42.414 "code": -32602, 00:18:42.414 "message": "Invalid cntlid range [1-0]" 00:18:42.414 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:42.414 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12339 -I 65520 00:18:42.672 [2024-12-09 10:29:27.276571] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12339: invalid cntlid range [1-65520] 00:18:42.672 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:42.672 { 00:18:42.672 "nqn": "nqn.2016-06.io.spdk:cnode12339", 00:18:42.672 "max_cntlid": 65520, 00:18:42.672 "method": "nvmf_create_subsystem", 00:18:42.672 "req_id": 1 00:18:42.672 } 00:18:42.672 Got JSON-RPC error response 00:18:42.672 response: 00:18:42.672 { 00:18:42.672 "code": -32602, 00:18:42.672 "message": "Invalid cntlid range [1-65520]" 00:18:42.672 }' 00:18:42.672 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:42.672 { 00:18:42.672 "nqn": "nqn.2016-06.io.spdk:cnode12339", 00:18:42.672 "max_cntlid": 65520, 00:18:42.672 "method": "nvmf_create_subsystem", 00:18:42.672 "req_id": 1 00:18:42.672 } 00:18:42.672 Got JSON-RPC error response 00:18:42.672 response: 00:18:42.672 { 00:18:42.672 "code": -32602, 00:18:42.672 "message": "Invalid cntlid range [1-65520]" 00:18:42.672 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:42.672 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25086 -i 6 -I 5 00:18:42.931 [2024-12-09 10:29:27.561515] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25086: invalid cntlid range [6-5] 00:18:42.931 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:42.931 { 00:18:42.931 "nqn": "nqn.2016-06.io.spdk:cnode25086", 00:18:42.931 "min_cntlid": 6, 00:18:42.931 "max_cntlid": 5, 00:18:42.931 "method": "nvmf_create_subsystem", 00:18:42.931 "req_id": 1 00:18:42.931 } 00:18:42.931 Got JSON-RPC error response 00:18:42.931 response: 00:18:42.931 { 00:18:42.931 "code": -32602, 00:18:42.931 "message": "Invalid cntlid range [6-5]" 00:18:42.931 }' 00:18:42.931 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:42.931 { 00:18:42.931 "nqn": "nqn.2016-06.io.spdk:cnode25086", 00:18:42.931 "min_cntlid": 6, 00:18:42.931 "max_cntlid": 5, 00:18:42.931 "method": "nvmf_create_subsystem", 00:18:42.931 "req_id": 1 00:18:42.931 } 00:18:42.931 Got JSON-RPC error response 00:18:42.931 response: 00:18:42.931 { 00:18:42.931 "code": -32602, 00:18:42.931 "message": "Invalid cntlid range [6-5]" 00:18:42.931 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:42.931 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 
00:18:43.190 { 00:18:43.190 "name": "foobar", 00:18:43.190 "method": "nvmf_delete_target", 00:18:43.190 "req_id": 1 00:18:43.190 } 00:18:43.190 Got JSON-RPC error response 00:18:43.190 response: 00:18:43.190 { 00:18:43.190 "code": -32602, 00:18:43.190 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:43.190 }' 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:43.190 { 00:18:43.190 "name": "foobar", 00:18:43.190 "method": "nvmf_delete_target", 00:18:43.190 "req_id": 1 00:18:43.190 } 00:18:43.190 Got JSON-RPC error response 00:18:43.190 response: 00:18:43.190 { 00:18:43.190 "code": -32602, 00:18:43.190 "message": "The specified target doesn't exist, cannot delete it." 00:18:43.190 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.190 rmmod nvme_tcp 00:18:43.190 rmmod nvme_fabrics 00:18:43.190 rmmod nvme_keyring 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2057377 ']' 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2057377 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2057377 ']' 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2057377 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2057377 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2057377' 00:18:43.190 killing process with pid 2057377 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2057377 00:18:43.190 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- common/autotest_common.sh@978 -- # wait 2057377 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.761 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:45.764 00:18:45.764 real 0m13.110s 00:18:45.764 user 0m35.211s 00:18:45.764 sys 0m3.722s 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.764 ************************************ 00:18:45.764 END TEST nvmf_invalid 00:18:45.764 ************************************ 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.764 ************************************ 00:18:45.764 START TEST nvmf_connect_stress 00:18:45.764 ************************************ 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:45.764 * Looking for test storage... 
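The five rejected requests traced above all trip the same bounds check in rpc_nvmf_create_subsystem: controller IDs must satisfy 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF), and anything outside that window returns JSON-RPC error -32602 "Invalid cntlid range". As a minimal sketch of a request that passes the check, reusing the -i/-I flags exercised in this trace (the NQN below is illustrative, and a running nvmf_tgt listening on its default RPC socket is assumed):

  # valid: 1 <= min_cntlid <= max_cntlid <= 65519 (0xFFEF)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 1 -I 65519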
00:18:45.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:45.764 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:45.765 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:46.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.023 --rc genhtml_branch_coverage=1 00:18:46.023 --rc genhtml_function_coverage=1 00:18:46.023 --rc genhtml_legend=1 00:18:46.023 --rc geninfo_all_blocks=1 00:18:46.023 --rc geninfo_unexecuted_blocks=1 00:18:46.023 00:18:46.023 ' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:46.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.023 --rc genhtml_branch_coverage=1 00:18:46.023 --rc genhtml_function_coverage=1 00:18:46.023 --rc genhtml_legend=1 00:18:46.023 --rc geninfo_all_blocks=1 00:18:46.023 --rc geninfo_unexecuted_blocks=1 00:18:46.023 00:18:46.023 ' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:46.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.023 --rc genhtml_branch_coverage=1 00:18:46.023 --rc genhtml_function_coverage=1 00:18:46.023 --rc genhtml_legend=1 00:18:46.023 --rc geninfo_all_blocks=1 00:18:46.023 --rc geninfo_unexecuted_blocks=1 00:18:46.023 00:18:46.023 ' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:46.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.023 --rc genhtml_branch_coverage=1 00:18:46.023 --rc genhtml_function_coverage=1 00:18:46.023 --rc genhtml_legend=1 00:18:46.023 --rc geninfo_all_blocks=1 00:18:46.023 --rc geninfo_unexecuted_blocks=1 00:18:46.023 00:18:46.023 ' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:46.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.023 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.024 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.024 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.024 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.024 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.024 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.024 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:49.319 10:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:49.319 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:49.320 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:49.320 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:49.320 Found net devices under 0000:84:00.0: cvl_0_0 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:49.320 Found net devices under 0000:84:00.1: cvl_0_1 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:49.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:18:49.320 00:18:49.320 --- 10.0.0.2 ping statistics --- 00:18:49.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.320 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:18:49.320 00:18:49.320 --- 10.0.0.1 ping statistics --- 00:18:49.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.320 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.320 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2060439 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2060439 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2060439 ']' 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:49.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.321 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.321 [2024-12-09 10:29:33.668516] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:18:49.321 [2024-12-09 10:29:33.668628] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.321 [2024-12-09 10:29:33.767961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:49.321 [2024-12-09 10:29:33.842285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.321 [2024-12-09 10:29:33.842362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.321 [2024-12-09 10:29:33.842383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.321 [2024-12-09 10:29:33.842399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.321 [2024-12-09 10:29:33.842412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.321 [2024-12-09 10:29:33.844383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.321 [2024-12-09 10:29:33.844457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.321 [2024-12-09 10:29:33.844461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.580 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.580 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:49.580 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.580 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.580 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.580 [2024-12-09 10:29:34.012921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.580 [2024-12-09 10:29:34.030958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.580 NULL1 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2060460 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:49.580 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:49.581 10:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2060460 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.581 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[liveness-poll iterations condensed: the [[ 0 == 0 ]] / kill -0 2060460 / rpc_cmd / xtrace_disable / set +x trace above repeats unchanged, several times a second, with timestamps advancing from 00:18:49.839 to 00:18:59.247 while the stress workload runs; the final iterations and the loop's exit are kept below]
00:18:59.247 10:29:43
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2060460 00:18:59.247 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.247 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.247 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.505 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.505 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2060460 00:18:59.505 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.505 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.505 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.763 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2060460 00:18:59.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2060460) - No such process 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2060460 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.763 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.763 rmmod nvme_tcp 00:19:00.023 rmmod nvme_fabrics 00:19:00.023 rmmod nvme_keyring 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2060439 ']' 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2060439 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2060439 ']' 00:19:00.023 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2060439 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2060439 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2060439' 00:19:00.023 killing process with pid 2060439 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2060439 00:19:00.023 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2060439 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.284 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.829 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.829 00:19:02.829 real 0m16.553s 00:19:02.829 user 0m38.725s 00:19:02.829 sys 0m7.152s 00:19:02.829 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.829 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.829 ************************************ 00:19:02.829 END TEST nvmf_connect_stress 00:19:02.829 ************************************ 00:19:02.829 10:29:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:02.829 10:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.829 
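[editor's note] Two shell patterns are worth pulling out of the connect_stress run that just finished. First, the liveness poll that produced the repeated kill -0 2060460 traces: the script backgrounds the stress initiator, then spins on kill -0 (signal 0 delivers nothing and only tests that the pid exists), issuing an RPC on each pass so the target's control path stays busy until the workload exits and kill reports "No such process". A minimal runnable sketch of the pattern, with hypothetical stand-ins ('sleep 10' plays the stress binary, 'echo' plays rpc_cmd):

  #!/usr/bin/env bash
  # Sketch of the connect_stress monitor loop seen above (names are stand-ins).
  sleep 10 &
  pid=$!
  while kill -0 "$pid" 2>/dev/null; do   # signal 0: existence check only, nothing is sent
      echo "pid $pid still alive"        # the real script issues rpc_cmd here
      sleep 1
  done
  wait "$pid"                            # reap the exit status once the workload is gone

Second, the firewall cleanup visible in the nvmftestfini trace above: nvmf/common.sh tags every rule it inserts with an SPDK_NVMF comment (the tagged insert appears later in this log when the next test calls ipts), so teardown can strip all of its rules in one filtered save/restore instead of tracking rule numbers:

  # Rules are added carrying a marker comment (exact form as traced)...
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # ...so cleanup is a single pass that drops every tagged rule:
  iptables-save | grep -v SPDK_NVMF | iptables-restore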
10:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.829 10:29:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.829 ************************************ 00:19:02.829 START TEST nvmf_fused_ordering 00:19:02.829 ************************************ 00:19:02.829 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:02.829 * Looking for test storage... 00:19:02.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:02.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.829 --rc genhtml_branch_coverage=1 00:19:02.829 --rc genhtml_function_coverage=1 00:19:02.829 --rc genhtml_legend=1 00:19:02.829 --rc geninfo_all_blocks=1 00:19:02.829 --rc geninfo_unexecuted_blocks=1 00:19:02.829 00:19:02.829 ' 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:02.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.829 --rc genhtml_branch_coverage=1 00:19:02.829 --rc genhtml_function_coverage=1 00:19:02.829 --rc genhtml_legend=1 00:19:02.829 --rc geninfo_all_blocks=1 00:19:02.829 --rc geninfo_unexecuted_blocks=1 00:19:02.829 00:19:02.829 ' 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:02.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.829 --rc genhtml_branch_coverage=1 00:19:02.829 --rc genhtml_function_coverage=1 00:19:02.829 --rc genhtml_legend=1 00:19:02.829 --rc geninfo_all_blocks=1 00:19:02.829 --rc geninfo_unexecuted_blocks=1 00:19:02.829 00:19:02.829 ' 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:02.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.829 --rc genhtml_branch_coverage=1 00:19:02.829 --rc genhtml_function_coverage=1 00:19:02.829 --rc genhtml_legend=1 00:19:02.829 --rc geninfo_all_blocks=1 00:19:02.829 --rc geninfo_unexecuted_blocks=1 00:19:02.829 00:19:02.829 ' 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.829 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:...:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [PATH expansions condensed: paths/export.sh@2, @3 and @4 each re-prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to an already duplicated PATH, and the three resulting multi-kilobyte values are omitted here] 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [the same duplicated PATH, condensed] 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:19:02.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:19:02.830 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.123 10:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:06.123 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:06.123 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:06.123 Found net devices under 0000:84:00.0: cvl_0_0 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:06.123 Found net devices under 0000:84:00.1: cvl_0_1 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:19:06.123 00:19:06.123 --- 10.0.0.2 ping statistics --- 00:19:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.123 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:19:06.123 00:19:06.123 --- 10.0.0.1 ping statistics --- 00:19:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.123 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2063755 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2063755 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2063755 ']' 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:06.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.123 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.123 [2024-12-09 10:29:50.638563] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:19:06.123 [2024-12-09 10:29:50.638666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.382 [2024-12-09 10:29:50.785607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.382 [2024-12-09 10:29:50.900864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.382 [2024-12-09 10:29:50.900978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.382 [2024-12-09 10:29:50.901016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.382 [2024-12-09 10:29:50.901047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.382 [2024-12-09 10:29:50.901072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.382 [2024-12-09 10:29:50.902016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 [2024-12-09 10:29:51.209757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 [2024-12-09 10:29:51.234271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 NULL1 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.740 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:06.740 [2024-12-09 10:29:51.295369] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
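[editor's note] At this point the whole test stand is up, and every piece of it was assembled by commands visible in this trace: nvmftestinit moved one e810 port (cvl_0_0) into a private network namespace, leaving its peer port cvl_0_1 in the root namespace as the initiator side, numbered the pair 10.0.0.1/10.0.0.2, and verified both directions with ping; nvmf_tgt itself was launched inside that namespace, and its RPC socket /var/tmp/spdk.sock stays reachable from the root namespace because a Unix socket is a filesystem path, not a network endpoint. Collected in one place, using the standard rpc.py spellings of the same RPCs the rpc_cmd helper issued (a sketch, not the harness itself; the '-t tcp -o -u 8192' transport flags are copied verbatim from the trace):

  # Network plumbing, as traced in nvmf/common.sh's nvmf_tcp_init:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Target configuration, rpc.py equivalents of the traced rpc_cmd calls:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks -> "size: 1GB" below
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1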
00:19:06.740 [2024-12-09 10:29:51.295414] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063900 ] 00:19:07.681 Attached to nqn.2016-06.io.spdk:cnode1 00:19:07.681 Namespace ID: 1 size: 1GB 00:19:07.681 fused_ordering(0)
[counter run condensed: fused_ordering(1) through fused_ordering(313) follow in an unbroken ascending sequence, timestamps advancing from 00:19:07.681 into 00:19:08]
00:19:08.624 fused_ordering(314) 00:19:08.624 fused_ordering(315) 00:19:08.624 fused_ordering(316) 00:19:08.624 fused_ordering(317) 00:19:08.624 fused_ordering(318) 00:19:08.624 fused_ordering(319) 00:19:08.624 fused_ordering(320) 00:19:08.624 fused_ordering(321) 00:19:08.624 fused_ordering(322) 00:19:08.624 fused_ordering(323) 00:19:08.624 fused_ordering(324) 00:19:08.624 fused_ordering(325) 00:19:08.624 fused_ordering(326) 00:19:08.624 fused_ordering(327) 00:19:08.624 fused_ordering(328) 00:19:08.624 fused_ordering(329) 00:19:08.624 fused_ordering(330) 00:19:08.624 fused_ordering(331) 00:19:08.624 fused_ordering(332) 00:19:08.624 fused_ordering(333) 00:19:08.624 fused_ordering(334) 00:19:08.624 fused_ordering(335) 00:19:08.624 fused_ordering(336) 00:19:08.624 fused_ordering(337) 00:19:08.624 fused_ordering(338) 00:19:08.624 fused_ordering(339) 00:19:08.624 fused_ordering(340) 00:19:08.624 fused_ordering(341) 00:19:08.624 fused_ordering(342) 00:19:08.624 fused_ordering(343) 00:19:08.624 fused_ordering(344) 00:19:08.624 fused_ordering(345) 00:19:08.624 fused_ordering(346) 00:19:08.624 fused_ordering(347) 00:19:08.624 fused_ordering(348) 00:19:08.624 fused_ordering(349) 00:19:08.624 fused_ordering(350) 00:19:08.624 fused_ordering(351) 00:19:08.624 fused_ordering(352) 00:19:08.624 fused_ordering(353) 00:19:08.624 fused_ordering(354) 00:19:08.624 fused_ordering(355) 00:19:08.624 fused_ordering(356) 00:19:08.624 fused_ordering(357) 00:19:08.624 fused_ordering(358) 00:19:08.624 fused_ordering(359) 00:19:08.624 fused_ordering(360) 00:19:08.624 fused_ordering(361) 00:19:08.624 fused_ordering(362) 00:19:08.624 fused_ordering(363) 00:19:08.624 fused_ordering(364) 00:19:08.624 fused_ordering(365) 00:19:08.624 fused_ordering(366) 00:19:08.624 fused_ordering(367) 00:19:08.624 fused_ordering(368) 00:19:08.624 fused_ordering(369) 00:19:08.624 fused_ordering(370) 00:19:08.624 fused_ordering(371) 00:19:08.624 fused_ordering(372) 00:19:08.624 fused_ordering(373) 00:19:08.624 fused_ordering(374) 00:19:08.624 fused_ordering(375) 00:19:08.624 fused_ordering(376) 00:19:08.624 fused_ordering(377) 00:19:08.624 fused_ordering(378) 00:19:08.624 fused_ordering(379) 00:19:08.624 fused_ordering(380) 00:19:08.624 fused_ordering(381) 00:19:08.624 fused_ordering(382) 00:19:08.624 fused_ordering(383) 00:19:08.624 fused_ordering(384) 00:19:08.624 fused_ordering(385) 00:19:08.624 fused_ordering(386) 00:19:08.624 fused_ordering(387) 00:19:08.624 fused_ordering(388) 00:19:08.624 fused_ordering(389) 00:19:08.624 fused_ordering(390) 00:19:08.624 fused_ordering(391) 00:19:08.624 fused_ordering(392) 00:19:08.624 fused_ordering(393) 00:19:08.624 fused_ordering(394) 00:19:08.624 fused_ordering(395) 00:19:08.624 fused_ordering(396) 00:19:08.624 fused_ordering(397) 00:19:08.624 fused_ordering(398) 00:19:08.624 fused_ordering(399) 00:19:08.624 fused_ordering(400) 00:19:08.624 fused_ordering(401) 00:19:08.624 fused_ordering(402) 00:19:08.624 fused_ordering(403) 00:19:08.624 fused_ordering(404) 00:19:08.624 fused_ordering(405) 00:19:08.624 fused_ordering(406) 00:19:08.624 fused_ordering(407) 00:19:08.624 fused_ordering(408) 00:19:08.624 fused_ordering(409) 00:19:08.624 fused_ordering(410) 00:19:09.566 fused_ordering(411) 00:19:09.566 fused_ordering(412) 00:19:09.566 fused_ordering(413) 00:19:09.566 fused_ordering(414) 00:19:09.566 fused_ordering(415) 00:19:09.566 fused_ordering(416) 00:19:09.566 fused_ordering(417) 00:19:09.566 fused_ordering(418) 00:19:09.566 fused_ordering(419) 00:19:09.566 fused_ordering(420) 00:19:09.566 
fused_ordering(421) 00:19:09.566 fused_ordering(422) 00:19:09.566 fused_ordering(423) 00:19:09.566 fused_ordering(424) 00:19:09.566 fused_ordering(425) 00:19:09.566 fused_ordering(426) 00:19:09.566 fused_ordering(427) 00:19:09.566 fused_ordering(428) 00:19:09.566 fused_ordering(429) 00:19:09.566 fused_ordering(430) 00:19:09.566 fused_ordering(431) 00:19:09.566 fused_ordering(432) 00:19:09.566 fused_ordering(433) 00:19:09.566 fused_ordering(434) 00:19:09.566 fused_ordering(435) 00:19:09.566 fused_ordering(436) 00:19:09.566 fused_ordering(437) 00:19:09.566 fused_ordering(438) 00:19:09.566 fused_ordering(439) 00:19:09.566 fused_ordering(440) 00:19:09.566 fused_ordering(441) 00:19:09.566 fused_ordering(442) 00:19:09.566 fused_ordering(443) 00:19:09.566 fused_ordering(444) 00:19:09.566 fused_ordering(445) 00:19:09.566 fused_ordering(446) 00:19:09.566 fused_ordering(447) 00:19:09.566 fused_ordering(448) 00:19:09.566 fused_ordering(449) 00:19:09.566 fused_ordering(450) 00:19:09.566 fused_ordering(451) 00:19:09.566 fused_ordering(452) 00:19:09.566 fused_ordering(453) 00:19:09.566 fused_ordering(454) 00:19:09.566 fused_ordering(455) 00:19:09.566 fused_ordering(456) 00:19:09.566 fused_ordering(457) 00:19:09.566 fused_ordering(458) 00:19:09.566 fused_ordering(459) 00:19:09.566 fused_ordering(460) 00:19:09.566 fused_ordering(461) 00:19:09.566 fused_ordering(462) 00:19:09.566 fused_ordering(463) 00:19:09.566 fused_ordering(464) 00:19:09.566 fused_ordering(465) 00:19:09.566 fused_ordering(466) 00:19:09.566 fused_ordering(467) 00:19:09.566 fused_ordering(468) 00:19:09.566 fused_ordering(469) 00:19:09.566 fused_ordering(470) 00:19:09.566 fused_ordering(471) 00:19:09.566 fused_ordering(472) 00:19:09.566 fused_ordering(473) 00:19:09.566 fused_ordering(474) 00:19:09.566 fused_ordering(475) 00:19:09.566 fused_ordering(476) 00:19:09.566 fused_ordering(477) 00:19:09.566 fused_ordering(478) 00:19:09.566 fused_ordering(479) 00:19:09.566 fused_ordering(480) 00:19:09.566 fused_ordering(481) 00:19:09.566 fused_ordering(482) 00:19:09.566 fused_ordering(483) 00:19:09.566 fused_ordering(484) 00:19:09.566 fused_ordering(485) 00:19:09.566 fused_ordering(486) 00:19:09.566 fused_ordering(487) 00:19:09.566 fused_ordering(488) 00:19:09.566 fused_ordering(489) 00:19:09.566 fused_ordering(490) 00:19:09.566 fused_ordering(491) 00:19:09.566 fused_ordering(492) 00:19:09.566 fused_ordering(493) 00:19:09.566 fused_ordering(494) 00:19:09.566 fused_ordering(495) 00:19:09.566 fused_ordering(496) 00:19:09.566 fused_ordering(497) 00:19:09.566 fused_ordering(498) 00:19:09.566 fused_ordering(499) 00:19:09.566 fused_ordering(500) 00:19:09.566 fused_ordering(501) 00:19:09.566 fused_ordering(502) 00:19:09.566 fused_ordering(503) 00:19:09.566 fused_ordering(504) 00:19:09.566 fused_ordering(505) 00:19:09.566 fused_ordering(506) 00:19:09.566 fused_ordering(507) 00:19:09.566 fused_ordering(508) 00:19:09.566 fused_ordering(509) 00:19:09.566 fused_ordering(510) 00:19:09.566 fused_ordering(511) 00:19:09.566 fused_ordering(512) 00:19:09.566 fused_ordering(513) 00:19:09.566 fused_ordering(514) 00:19:09.566 fused_ordering(515) 00:19:09.566 fused_ordering(516) 00:19:09.566 fused_ordering(517) 00:19:09.566 fused_ordering(518) 00:19:09.566 fused_ordering(519) 00:19:09.566 fused_ordering(520) 00:19:09.566 fused_ordering(521) 00:19:09.566 fused_ordering(522) 00:19:09.566 fused_ordering(523) 00:19:09.566 fused_ordering(524) 00:19:09.566 fused_ordering(525) 00:19:09.566 fused_ordering(526) 00:19:09.566 fused_ordering(527) 00:19:09.566 fused_ordering(528) 
00:19:09.566 fused_ordering(529) 00:19:09.566 fused_ordering(530) 00:19:09.566 fused_ordering(531) 00:19:09.566 fused_ordering(532) 00:19:09.566 fused_ordering(533) 00:19:09.566 fused_ordering(534) 00:19:09.566 fused_ordering(535) 00:19:09.566 fused_ordering(536) 00:19:09.566 fused_ordering(537) 00:19:09.566 fused_ordering(538) 00:19:09.566 fused_ordering(539) 00:19:09.566 fused_ordering(540) 00:19:09.566 fused_ordering(541) 00:19:09.566 fused_ordering(542) 00:19:09.566 fused_ordering(543) 00:19:09.566 fused_ordering(544) 00:19:09.566 fused_ordering(545) 00:19:09.566 fused_ordering(546) 00:19:09.566 fused_ordering(547) 00:19:09.566 fused_ordering(548) 00:19:09.566 fused_ordering(549) 00:19:09.566 fused_ordering(550) 00:19:09.566 fused_ordering(551) 00:19:09.566 fused_ordering(552) 00:19:09.566 fused_ordering(553) 00:19:09.566 fused_ordering(554) 00:19:09.566 fused_ordering(555) 00:19:09.566 fused_ordering(556) 00:19:09.566 fused_ordering(557) 00:19:09.566 fused_ordering(558) 00:19:09.566 fused_ordering(559) 00:19:09.566 fused_ordering(560) 00:19:09.566 fused_ordering(561) 00:19:09.566 fused_ordering(562) 00:19:09.566 fused_ordering(563) 00:19:09.566 fused_ordering(564) 00:19:09.566 fused_ordering(565) 00:19:09.566 fused_ordering(566) 00:19:09.566 fused_ordering(567) 00:19:09.566 fused_ordering(568) 00:19:09.566 fused_ordering(569) 00:19:09.566 fused_ordering(570) 00:19:09.566 fused_ordering(571) 00:19:09.566 fused_ordering(572) 00:19:09.566 fused_ordering(573) 00:19:09.566 fused_ordering(574) 00:19:09.566 fused_ordering(575) 00:19:09.566 fused_ordering(576) 00:19:09.566 fused_ordering(577) 00:19:09.566 fused_ordering(578) 00:19:09.566 fused_ordering(579) 00:19:09.566 fused_ordering(580) 00:19:09.566 fused_ordering(581) 00:19:09.566 fused_ordering(582) 00:19:09.566 fused_ordering(583) 00:19:09.566 fused_ordering(584) 00:19:09.566 fused_ordering(585) 00:19:09.566 fused_ordering(586) 00:19:09.566 fused_ordering(587) 00:19:09.566 fused_ordering(588) 00:19:09.566 fused_ordering(589) 00:19:09.566 fused_ordering(590) 00:19:09.566 fused_ordering(591) 00:19:09.566 fused_ordering(592) 00:19:09.566 fused_ordering(593) 00:19:09.566 fused_ordering(594) 00:19:09.566 fused_ordering(595) 00:19:09.566 fused_ordering(596) 00:19:09.566 fused_ordering(597) 00:19:09.566 fused_ordering(598) 00:19:09.566 fused_ordering(599) 00:19:09.566 fused_ordering(600) 00:19:09.566 fused_ordering(601) 00:19:09.566 fused_ordering(602) 00:19:09.566 fused_ordering(603) 00:19:09.566 fused_ordering(604) 00:19:09.566 fused_ordering(605) 00:19:09.567 fused_ordering(606) 00:19:09.567 fused_ordering(607) 00:19:09.567 fused_ordering(608) 00:19:09.567 fused_ordering(609) 00:19:09.567 fused_ordering(610) 00:19:09.567 fused_ordering(611) 00:19:09.567 fused_ordering(612) 00:19:09.567 fused_ordering(613) 00:19:09.567 fused_ordering(614) 00:19:09.567 fused_ordering(615) 00:19:10.949 fused_ordering(616) 00:19:10.949 fused_ordering(617) 00:19:10.949 fused_ordering(618) 00:19:10.949 fused_ordering(619) 00:19:10.949 fused_ordering(620) 00:19:10.949 fused_ordering(621) 00:19:10.949 fused_ordering(622) 00:19:10.949 fused_ordering(623) 00:19:10.949 fused_ordering(624) 00:19:10.949 fused_ordering(625) 00:19:10.949 fused_ordering(626) 00:19:10.949 fused_ordering(627) 00:19:10.949 fused_ordering(628) 00:19:10.949 fused_ordering(629) 00:19:10.949 fused_ordering(630) 00:19:10.949 fused_ordering(631) 00:19:10.949 fused_ordering(632) 00:19:10.949 fused_ordering(633) 00:19:10.949 fused_ordering(634) 00:19:10.949 fused_ordering(635) 00:19:10.949 
fused_ordering(636) 00:19:10.949 fused_ordering(637) 00:19:10.949 fused_ordering(638) 00:19:10.949 fused_ordering(639) 00:19:10.949 fused_ordering(640) 00:19:10.949 fused_ordering(641) 00:19:10.949 fused_ordering(642) 00:19:10.949 fused_ordering(643) 00:19:10.949 fused_ordering(644) 00:19:10.949 fused_ordering(645) 00:19:10.949 fused_ordering(646) 00:19:10.949 fused_ordering(647) 00:19:10.949 fused_ordering(648) 00:19:10.949 fused_ordering(649) 00:19:10.949 fused_ordering(650) 00:19:10.949 fused_ordering(651) 00:19:10.949 fused_ordering(652) 00:19:10.949 fused_ordering(653) 00:19:10.949 fused_ordering(654) 00:19:10.949 fused_ordering(655) 00:19:10.949 fused_ordering(656) 00:19:10.949 fused_ordering(657) 00:19:10.949 fused_ordering(658) 00:19:10.949 fused_ordering(659) 00:19:10.949 fused_ordering(660) 00:19:10.949 fused_ordering(661) 00:19:10.949 fused_ordering(662) 00:19:10.949 fused_ordering(663) 00:19:10.949 fused_ordering(664) 00:19:10.949 fused_ordering(665) 00:19:10.949 fused_ordering(666) 00:19:10.949 fused_ordering(667) 00:19:10.949 fused_ordering(668) 00:19:10.949 fused_ordering(669) 00:19:10.949 fused_ordering(670) 00:19:10.949 fused_ordering(671) 00:19:10.949 fused_ordering(672) 00:19:10.949 fused_ordering(673) 00:19:10.949 fused_ordering(674) 00:19:10.949 fused_ordering(675) 00:19:10.949 fused_ordering(676) 00:19:10.949 fused_ordering(677) 00:19:10.949 fused_ordering(678) 00:19:10.949 fused_ordering(679) 00:19:10.949 fused_ordering(680) 00:19:10.949 fused_ordering(681) 00:19:10.949 fused_ordering(682) 00:19:10.949 fused_ordering(683) 00:19:10.949 fused_ordering(684) 00:19:10.949 fused_ordering(685) 00:19:10.949 fused_ordering(686) 00:19:10.949 fused_ordering(687) 00:19:10.949 fused_ordering(688) 00:19:10.949 fused_ordering(689) 00:19:10.949 fused_ordering(690) 00:19:10.949 fused_ordering(691) 00:19:10.949 fused_ordering(692) 00:19:10.949 fused_ordering(693) 00:19:10.949 fused_ordering(694) 00:19:10.949 fused_ordering(695) 00:19:10.949 fused_ordering(696) 00:19:10.949 fused_ordering(697) 00:19:10.949 fused_ordering(698) 00:19:10.949 fused_ordering(699) 00:19:10.949 fused_ordering(700) 00:19:10.949 fused_ordering(701) 00:19:10.949 fused_ordering(702) 00:19:10.949 fused_ordering(703) 00:19:10.949 fused_ordering(704) 00:19:10.949 fused_ordering(705) 00:19:10.949 fused_ordering(706) 00:19:10.949 fused_ordering(707) 00:19:10.949 fused_ordering(708) 00:19:10.949 fused_ordering(709) 00:19:10.949 fused_ordering(710) 00:19:10.949 fused_ordering(711) 00:19:10.949 fused_ordering(712) 00:19:10.949 fused_ordering(713) 00:19:10.949 fused_ordering(714) 00:19:10.949 fused_ordering(715) 00:19:10.950 fused_ordering(716) 00:19:10.950 fused_ordering(717) 00:19:10.950 fused_ordering(718) 00:19:10.950 fused_ordering(719) 00:19:10.950 fused_ordering(720) 00:19:10.950 fused_ordering(721) 00:19:10.950 fused_ordering(722) 00:19:10.950 fused_ordering(723) 00:19:10.950 fused_ordering(724) 00:19:10.950 fused_ordering(725) 00:19:10.950 fused_ordering(726) 00:19:10.950 fused_ordering(727) 00:19:10.950 fused_ordering(728) 00:19:10.950 fused_ordering(729) 00:19:10.950 fused_ordering(730) 00:19:10.950 fused_ordering(731) 00:19:10.950 fused_ordering(732) 00:19:10.950 fused_ordering(733) 00:19:10.950 fused_ordering(734) 00:19:10.950 fused_ordering(735) 00:19:10.950 fused_ordering(736) 00:19:10.950 fused_ordering(737) 00:19:10.950 fused_ordering(738) 00:19:10.950 fused_ordering(739) 00:19:10.950 fused_ordering(740) 00:19:10.950 fused_ordering(741) 00:19:10.950 fused_ordering(742) 00:19:10.950 fused_ordering(743) 
00:19:10.950 fused_ordering(744) 00:19:10.950 fused_ordering(745) 00:19:10.950 fused_ordering(746) 00:19:10.950 fused_ordering(747) 00:19:10.950 fused_ordering(748) 00:19:10.950 fused_ordering(749) 00:19:10.950 fused_ordering(750) 00:19:10.950 fused_ordering(751) 00:19:10.950 fused_ordering(752) 00:19:10.950 fused_ordering(753) 00:19:10.950 fused_ordering(754) 00:19:10.950 fused_ordering(755) 00:19:10.950 fused_ordering(756) 00:19:10.950 fused_ordering(757) 00:19:10.950 fused_ordering(758) 00:19:10.950 fused_ordering(759) 00:19:10.950 fused_ordering(760) 00:19:10.950 fused_ordering(761) 00:19:10.950 fused_ordering(762) 00:19:10.950 fused_ordering(763) 00:19:10.950 fused_ordering(764) 00:19:10.950 fused_ordering(765) 00:19:10.950 fused_ordering(766) 00:19:10.950 fused_ordering(767) 00:19:10.950 fused_ordering(768) 00:19:10.950 fused_ordering(769) 00:19:10.950 fused_ordering(770) 00:19:10.950 fused_ordering(771) 00:19:10.950 fused_ordering(772) 00:19:10.950 fused_ordering(773) 00:19:10.950 fused_ordering(774) 00:19:10.950 fused_ordering(775) 00:19:10.950 fused_ordering(776) 00:19:10.950 fused_ordering(777) 00:19:10.950 fused_ordering(778) 00:19:10.950 fused_ordering(779) 00:19:10.950 fused_ordering(780) 00:19:10.950 fused_ordering(781) 00:19:10.950 fused_ordering(782) 00:19:10.950 fused_ordering(783) 00:19:10.950 fused_ordering(784) 00:19:10.950 fused_ordering(785) 00:19:10.950 fused_ordering(786) 00:19:10.950 fused_ordering(787) 00:19:10.950 fused_ordering(788) 00:19:10.950 fused_ordering(789) 00:19:10.950 fused_ordering(790) 00:19:10.950 fused_ordering(791) 00:19:10.950 fused_ordering(792) 00:19:10.950 fused_ordering(793) 00:19:10.950 fused_ordering(794) 00:19:10.950 fused_ordering(795) 00:19:10.950 fused_ordering(796) 00:19:10.950 fused_ordering(797) 00:19:10.950 fused_ordering(798) 00:19:10.950 fused_ordering(799) 00:19:10.950 fused_ordering(800) 00:19:10.950 fused_ordering(801) 00:19:10.950 fused_ordering(802) 00:19:10.950 fused_ordering(803) 00:19:10.950 fused_ordering(804) 00:19:10.950 fused_ordering(805) 00:19:10.950 fused_ordering(806) 00:19:10.950 fused_ordering(807) 00:19:10.950 fused_ordering(808) 00:19:10.950 fused_ordering(809) 00:19:10.950 fused_ordering(810) 00:19:10.950 fused_ordering(811) 00:19:10.950 fused_ordering(812) 00:19:10.950 fused_ordering(813) 00:19:10.950 fused_ordering(814) 00:19:10.950 fused_ordering(815) 00:19:10.950 fused_ordering(816) 00:19:10.950 fused_ordering(817) 00:19:10.950 fused_ordering(818) 00:19:10.950 fused_ordering(819) 00:19:10.950 fused_ordering(820) 00:19:12.333 fused_ordering(821) 00:19:12.333 fused_ordering(822) 00:19:12.333 fused_ordering(823) 00:19:12.333 fused_ordering(824) 00:19:12.333 fused_ordering(825) 00:19:12.333 fused_ordering(826) 00:19:12.333 fused_ordering(827) 00:19:12.333 fused_ordering(828) 00:19:12.333 fused_ordering(829) 00:19:12.333 fused_ordering(830) 00:19:12.333 fused_ordering(831) 00:19:12.333 fused_ordering(832) 00:19:12.333 fused_ordering(833) 00:19:12.333 fused_ordering(834) 00:19:12.333 fused_ordering(835) 00:19:12.333 fused_ordering(836) 00:19:12.333 fused_ordering(837) 00:19:12.333 fused_ordering(838) 00:19:12.333 fused_ordering(839) 00:19:12.333 fused_ordering(840) 00:19:12.333 fused_ordering(841) 00:19:12.333 fused_ordering(842) 00:19:12.333 fused_ordering(843) 00:19:12.333 fused_ordering(844) 00:19:12.333 fused_ordering(845) 00:19:12.333 fused_ordering(846) 00:19:12.333 fused_ordering(847) 00:19:12.333 fused_ordering(848) 00:19:12.333 fused_ordering(849) 00:19:12.333 fused_ordering(850) 00:19:12.333 
fused_ordering(851) 00:19:12.333 fused_ordering(852) 00:19:12.333 fused_ordering(853) 00:19:12.333 fused_ordering(854) 00:19:12.333 fused_ordering(855) 00:19:12.333 fused_ordering(856) 00:19:12.333 fused_ordering(857) 00:19:12.333 fused_ordering(858) 00:19:12.333 fused_ordering(859) 00:19:12.333 fused_ordering(860) 00:19:12.333 fused_ordering(861) 00:19:12.333 fused_ordering(862) 00:19:12.333 fused_ordering(863) 00:19:12.333 fused_ordering(864) 00:19:12.333 fused_ordering(865) 00:19:12.333 fused_ordering(866) 00:19:12.333 fused_ordering(867) 00:19:12.333 fused_ordering(868) 00:19:12.333 fused_ordering(869) 00:19:12.333 fused_ordering(870) 00:19:12.333 fused_ordering(871) 00:19:12.333 fused_ordering(872) 00:19:12.333 fused_ordering(873) 00:19:12.333 fused_ordering(874) 00:19:12.333 fused_ordering(875) 00:19:12.333 fused_ordering(876) 00:19:12.333 fused_ordering(877) 00:19:12.333 fused_ordering(878) 00:19:12.333 fused_ordering(879) 00:19:12.333 fused_ordering(880) 00:19:12.333 fused_ordering(881) 00:19:12.333 fused_ordering(882) 00:19:12.333 fused_ordering(883) 00:19:12.333 fused_ordering(884) 00:19:12.333 fused_ordering(885) 00:19:12.333 fused_ordering(886) 00:19:12.333 fused_ordering(887) 00:19:12.333 fused_ordering(888) 00:19:12.333 fused_ordering(889) 00:19:12.333 fused_ordering(890) 00:19:12.333 fused_ordering(891) 00:19:12.333 fused_ordering(892) 00:19:12.333 fused_ordering(893) 00:19:12.333 fused_ordering(894) 00:19:12.333 fused_ordering(895) 00:19:12.333 fused_ordering(896) 00:19:12.333 fused_ordering(897) 00:19:12.333 fused_ordering(898) 00:19:12.333 fused_ordering(899) 00:19:12.333 fused_ordering(900) 00:19:12.333 fused_ordering(901) 00:19:12.333 fused_ordering(902) 00:19:12.333 fused_ordering(903) 00:19:12.333 fused_ordering(904) 00:19:12.333 fused_ordering(905) 00:19:12.333 fused_ordering(906) 00:19:12.333 fused_ordering(907) 00:19:12.333 fused_ordering(908) 00:19:12.333 fused_ordering(909) 00:19:12.333 fused_ordering(910) 00:19:12.333 fused_ordering(911) 00:19:12.333 fused_ordering(912) 00:19:12.333 fused_ordering(913) 00:19:12.333 fused_ordering(914) 00:19:12.333 fused_ordering(915) 00:19:12.333 fused_ordering(916) 00:19:12.333 fused_ordering(917) 00:19:12.333 fused_ordering(918) 00:19:12.333 fused_ordering(919) 00:19:12.333 fused_ordering(920) 00:19:12.333 fused_ordering(921) 00:19:12.333 fused_ordering(922) 00:19:12.333 fused_ordering(923) 00:19:12.333 fused_ordering(924) 00:19:12.333 fused_ordering(925) 00:19:12.333 fused_ordering(926) 00:19:12.333 fused_ordering(927) 00:19:12.333 fused_ordering(928) 00:19:12.333 fused_ordering(929) 00:19:12.333 fused_ordering(930) 00:19:12.333 fused_ordering(931) 00:19:12.333 fused_ordering(932) 00:19:12.333 fused_ordering(933) 00:19:12.333 fused_ordering(934) 00:19:12.333 fused_ordering(935) 00:19:12.333 fused_ordering(936) 00:19:12.333 fused_ordering(937) 00:19:12.333 fused_ordering(938) 00:19:12.333 fused_ordering(939) 00:19:12.333 fused_ordering(940) 00:19:12.333 fused_ordering(941) 00:19:12.333 fused_ordering(942) 00:19:12.333 fused_ordering(943) 00:19:12.333 fused_ordering(944) 00:19:12.333 fused_ordering(945) 00:19:12.333 fused_ordering(946) 00:19:12.333 fused_ordering(947) 00:19:12.333 fused_ordering(948) 00:19:12.333 fused_ordering(949) 00:19:12.333 fused_ordering(950) 00:19:12.333 fused_ordering(951) 00:19:12.333 fused_ordering(952) 00:19:12.333 fused_ordering(953) 00:19:12.333 fused_ordering(954) 00:19:12.333 fused_ordering(955) 00:19:12.333 fused_ordering(956) 00:19:12.333 fused_ordering(957) 00:19:12.333 fused_ordering(958) 
00:19:12.333 fused_ordering(959) 00:19:12.333 fused_ordering(960) 00:19:12.333 fused_ordering(961) 00:19:12.333 fused_ordering(962) 00:19:12.333 fused_ordering(963) 00:19:12.333 fused_ordering(964) 00:19:12.333 fused_ordering(965) 00:19:12.333 fused_ordering(966) 00:19:12.333 fused_ordering(967) 00:19:12.333 fused_ordering(968) 00:19:12.333 fused_ordering(969) 00:19:12.333 fused_ordering(970) 00:19:12.333 fused_ordering(971) 00:19:12.333 fused_ordering(972) 00:19:12.333 fused_ordering(973) 00:19:12.333 fused_ordering(974) 00:19:12.333 fused_ordering(975) 00:19:12.333 fused_ordering(976) 00:19:12.333 fused_ordering(977) 00:19:12.333 fused_ordering(978) 00:19:12.333 fused_ordering(979) 00:19:12.333 fused_ordering(980) 00:19:12.333 fused_ordering(981) 00:19:12.333 fused_ordering(982) 00:19:12.333 fused_ordering(983) 00:19:12.333 fused_ordering(984) 00:19:12.333 fused_ordering(985) 00:19:12.333 fused_ordering(986) 00:19:12.333 fused_ordering(987) 00:19:12.333 fused_ordering(988) 00:19:12.333 fused_ordering(989) 00:19:12.333 fused_ordering(990) 00:19:12.333 fused_ordering(991) 00:19:12.333 fused_ordering(992) 00:19:12.333 fused_ordering(993) 00:19:12.333 fused_ordering(994) 00:19:12.333 fused_ordering(995) 00:19:12.333 fused_ordering(996) 00:19:12.334 fused_ordering(997) 00:19:12.334 fused_ordering(998) 00:19:12.334 fused_ordering(999) 00:19:12.334 fused_ordering(1000) 00:19:12.334 fused_ordering(1001) 00:19:12.334 fused_ordering(1002) 00:19:12.334 fused_ordering(1003) 00:19:12.334 fused_ordering(1004) 00:19:12.334 fused_ordering(1005) 00:19:12.334 fused_ordering(1006) 00:19:12.334 fused_ordering(1007) 00:19:12.334 fused_ordering(1008) 00:19:12.334 fused_ordering(1009) 00:19:12.334 fused_ordering(1010) 00:19:12.334 fused_ordering(1011) 00:19:12.334 fused_ordering(1012) 00:19:12.334 fused_ordering(1013) 00:19:12.334 fused_ordering(1014) 00:19:12.334 fused_ordering(1015) 00:19:12.334 fused_ordering(1016) 00:19:12.334 fused_ordering(1017) 00:19:12.334 fused_ordering(1018) 00:19:12.334 fused_ordering(1019) 00:19:12.334 fused_ordering(1020) 00:19:12.334 fused_ordering(1021) 00:19:12.334 fused_ordering(1022) 00:19:12.334 fused_ordering(1023) 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:12.334 rmmod nvme_tcp 00:19:12.334 rmmod nvme_fabrics 00:19:12.334 rmmod nvme_keyring 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:12.334 10:29:56 
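
The nvmfcleanup trace above (nvmf/common.sh@121 through @129) amounts to a small retry loop: sync, relax errexit, and keep trying to unload the NVMe kernel modules until they go. A minimal bash sketch of that pattern, reconstructed from the trace; the sleep between attempts and the break condition are assumptions, since this run succeeded on the first pass, and the tcp-transport guard seen at @123 is noted rather than modeled:

nvmfcleanup() {
    sync
    set +e                             # @124: rmmod can fail while connections drain
    for i in {1..20}; do               # @125: bounded retry, not an infinite loop
        # -v echoes the underlying commands, which is where the
        # "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines come from
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # @126, @127
        sleep 1                        # assumption: back off before retrying
    done
    set -e                             # @128
    return 0                           # @129: cleanup never fails the test itself
}

Returning 0 unconditionally keeps a stubborn module reference from turning an otherwise green test red.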
00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2063755 ']'
00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2063755
00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2063755 ']'
00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2063755
00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:12.334 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063755
00:19:12.593 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:12.593 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:12.593 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063755'
killing process with pid 2063755
00:19:12.593 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2063755
00:19:12.593 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2063755
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:12.852 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:15.391 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:15.391
00:19:15.391 real 0m12.520s
00:19:15.391 user 0m10.681s
00:19:15.391 sys 0m6.087s
00:19:15.391 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:15.391 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:15.391 ************************************
00:19:15.391 END TEST nvmf_fused_ordering
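
The killprocess helper, traced at common/autotest_common.sh@954 through @978 just above, guards the kill with several sanity checks before reaping the target. A sketch assembled from those trace lines; the early return for a sudo process is an assumption, since the log only shows the comparison coming out false:

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1             # @954: require a pid argument
    kill -0 "$pid"                         # @958: signal 0 just probes that the process exists
    if [ "$(uname)" = Linux ]; then        # @959: comm lookup below is Linux-specific
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 for an SPDK target
    fi
    if [ "$process_name" = sudo ]; then    # @964: never signal the sudo wrapper itself
        return 1                           # assumption: real helper handles this differently
    fi
    echo "killing process with pid $pid"   # @972
    kill "$pid"                             # @973: default SIGTERM
    wait "$pid"                             # @978: reap it, so ports and hugepages are truly free;
                                            # works because the target was started by this shell
}

The wait at the end is what makes the next test safe to start: without it, the listening port and hugepage files could still be held while the follow-on test initializes.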
************************************
00:19:15.391 10:29:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:15.392 ************************************
00:19:15.392 START TEST nvmf_ns_masking
00:19:15.392 ************************************
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:19:15.392 * Looking for test storage...
00:19:15.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:15.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.392 --rc genhtml_branch_coverage=1 00:19:15.392 --rc genhtml_function_coverage=1 00:19:15.392 --rc genhtml_legend=1 00:19:15.392 --rc geninfo_all_blocks=1 00:19:15.392 --rc geninfo_unexecuted_blocks=1 00:19:15.392 00:19:15.392 ' 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:15.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.392 --rc genhtml_branch_coverage=1 00:19:15.392 --rc genhtml_function_coverage=1 00:19:15.392 --rc genhtml_legend=1 00:19:15.392 --rc geninfo_all_blocks=1 00:19:15.392 --rc geninfo_unexecuted_blocks=1 00:19:15.392 00:19:15.392 ' 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:15.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.392 --rc genhtml_branch_coverage=1 00:19:15.392 --rc genhtml_function_coverage=1 00:19:15.392 --rc genhtml_legend=1 00:19:15.392 --rc geninfo_all_blocks=1 00:19:15.392 --rc geninfo_unexecuted_blocks=1 00:19:15.392 00:19:15.392 ' 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:15.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.392 --rc genhtml_branch_coverage=1 00:19:15.392 --rc genhtml_function_coverage=1 00:19:15.392 --rc genhtml_legend=1 00:19:15.392 --rc geninfo_all_blocks=1 00:19:15.392 --rc geninfo_unexecuted_blocks=1 00:19:15.392 00:19:15.392 ' 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
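
The lcov version probe traced above and completed here (scripts/common.sh@333 through @368) is a generic dotted-version comparison: split both versions on '.', '-' or ':', then compare field by field, treating missing fields as zero. A condensed sketch of that logic; the real script also normalizes each field through a decimal helper (visible as the "decimal 1" and "decimal 2" steps), which is elided here:

cmp_versions() {
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"    # @336: "1.15" -> (1 15)
    local op=$2                        # @338: '<' in this run
    IFS=.-: read -ra ver2 <<< "$3"    # @337: "2"    -> (2)
    ver1_l=${#ver1[@]}                 # @340: 2
    ver2_l=${#ver2[@]}                 # @341: 1
    local lt=0 gt=0 v
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do   # @364
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
        if (( d1 > d2 )); then gt=1; break        # @367
        elif (( d1 < d2 )); then lt=1; break      # @368
        fi
    done
    case "$op" in
        '<') (( lt == 1 )) ;;          # exit status is the comparison result
        '>') (( gt == 1 )) ;;
    esac
}
lt() { cmp_versions "$1" '<' "$2"; }   # @373: the wrapper invoked as "lt 1.15 2"

So "lt 1.15 2" succeeds here (1 < 2 on the first field), which is how the harness decides the installed lcov predates version 2 and picks the matching coverage options.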
nvmf/common.sh@7 -- # uname -s 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.392 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
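
The paths/export.sh trace above (@2 through @6) also shows why PATH balloons: every test that sources the file prepends the same three toolchain directories again, so the entries repeat. A guarded variant using the directories from this run; the dedupe check is an addition, not something the traced script does:

# Prepend a directory to PATH only if it is not already present
# (assumption: the traced export.sh prepends unconditionally).
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH: skip
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH

With the guard in place, re-sourcing the file is idempotent and the PATH echoed at @6 would stay the same length run after run.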
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=05e159d4-0e06-4413-aae6-11b7dfbb3228 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b091f193-2952-4c87-a1a0-44a9836d7e13 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2b02791b-58f8-43ff-8efc-0e14313ccaf7 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:15.393 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.684 10:30:02 
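
The ns_masking fixture variables traced at target/ns_masking.sh@10 through @19, collected in one place for readability. Everything below is lifted from the trace; the inline comments note the values uuidgen happened to produce in this run:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
loops=5
ns1uuid=$(uuidgen)                     # 05e159d4-0e06-4413-aae6-11b7dfbb3228 in this run
ns2uuid=$(uuidgen)                     # b091f193-2952-4c87-a1a0-44a9836d7e13 in this run
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN1=nqn.2016-06.io.spdk:host1
HOSTNQN2=nqn.2016-06.io.spdk:host2
HOSTID=$(uuidgen)                      # 2b02791b-58f8-43ff-8efc-0e14313ccaf7 in this run

Generating the namespace UUIDs and host ID freshly with uuidgen means each run exercises masking against unique identifiers rather than values baked into the script.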
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:18.684 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:18.684 10:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:18.684 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:18.684 Found net devices under 0000:84:00.0: cvl_0_0 00:19:18.684 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
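
Interface discovery, as traced at nvmf/common.sh@410 through @429: each matched E810 PCI address is mapped to its kernel net devices through sysfs, filtered for link state, and stripped to bare interface names (cvl_0_0 and cvl_0_1 here). A sketch assuming pci_devs and net_devs were already set up by the device-ID matching above; reading operstate is a guess at where the "[[ up == up ]]" check gets its left-hand side:

for pci in "${pci_devs[@]}"; do                          # @410
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # @411: sysfs lists the netdevs per function
    for net_dev in "${!pci_net_devs[@]}"; do             # @417: iterate by index so we can unset
        state=$(cat "${pci_net_devs[net_dev]}/operstate")    # assumption: source of the "up" value
        [[ $state == up ]] || unset -v "pci_net_devs[net_dev]"   # @418: drop interfaces that are down
    done
    pci_net_devs=("${pci_net_devs[@]##*/}")              # @427: keep just the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @428: matches the log lines above
    net_devs+=("${pci_net_devs[@]}")                     # @429
done

On this machine both E810 functions pass the check, which is why the log reports cvl_0_0 under 0000:84:00.0 and cvl_0_1 under 0000:84:00.1.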
00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:18.685 Found net devices under 0000:84:00.1: cvl_0_1 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.685 10:30:02 
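With two interfaces available, nvmf_tcp_init splits them across network namespaces so the TCP test traffic crosses a real interface pair on one machine: cvl_0_0 becomes the target port inside the new cvl_0_0_ns_spdk namespace at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24. The equivalent commands, exactly as traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The two pings that follow verify the path in both directions before any NVMe traffic is attempted.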
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:18.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:19:18.685 00:19:18.685 --- 10.0.0.2 ping statistics --- 00:19:18.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.685 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:19:18.685 00:19:18.685 --- 10.0.0.1 ping statistics --- 00:19:18.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.685 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2066755 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2066755 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2066755 ']' 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.685 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:18.685 [2024-12-09 10:30:02.949592] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:19:18.685 [2024-12-09 10:30:02.949677] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.685 [2024-12-09 10:30:03.091574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.685 [2024-12-09 10:30:03.208325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.685 [2024-12-09 10:30:03.208436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.685 [2024-12-09 10:30:03.208473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.685 [2024-12-09 10:30:03.208512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.685 [2024-12-09 10:30:03.208538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
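nvmfappstart launches nvmf_tgt inside the target namespace (instance -i 0, tracepoint mask 0xFFFF), records its pid, and waitforlisten then blocks until the JSON-RPC socket accepts requests. A reduced sketch of that wait loop; the real helper in autotest_common.sh retries longer and reports errors in more detail:

    # Sketch: poll until the app's RPC socket answers, bailing if the app dies.
    nvmfpid=2066755                      # pid reported above
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        if [[ -S $rpc_addr ]] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break                        # RPC server is up
        fi
        sleep 0.1
    done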
00:19:18.685 [2024-12-09 10:30:03.209929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.945 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.945 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:18.945 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.945 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.945 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:19.207 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.207 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:19.777 [2024-12-09 10:30:04.314681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.777 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:19.777 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:19.777 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:20.347 Malloc1 00:19:20.347 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:21.287 Malloc2 00:19:21.287 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:21.855 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:22.425 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.993 [2024-12-09 10:30:07.343082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.993 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:22.993 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b02791b-58f8-43ff-8efc-0e14313ccaf7 -a 10.0.0.2 -s 4420 -i 4 00:19:22.993 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:22.993 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:22.993 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.993 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:22.993 
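The connect helper dials the subsystem from the initiator side with a fixed host NQN and host ID (the identity the masking rules later key on) and four I/O queues; waitforserial then polls lsblk until a block device carrying the subsystem serial appears. A paraphrase of the two steps, with all values taken from the trace:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 2b02791b-58f8-43ff-8efc-0e14313ccaf7 -a 10.0.0.2 -s 4420 -i 4
    i=0
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )); do
        (( i++ <= 15 )) || { echo "namespace never appeared"; exit 1; }
        sleep 2
    done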
10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:24.899 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:25.158 [ 0]:0x1 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f4c50e7dda04aaebcdd8e6bc1b1b904 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f4c50e7dda04aaebcdd8e6bc1b1b904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.158 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.725 [ 0]:0x1 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f4c50e7dda04aaebcdd8e6bc1b1b904 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f4c50e7dda04aaebcdd8e6bc1b1b904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.725 10:30:10 
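Every visibility assertion goes through ns_is_visible: the NSID must appear in nvme list-ns and its id-ns NGUID must be non-zero; a masked namespace reads back as all zeroes, while Malloc1 reports 1f4c50e7dda04aaebcdd8e6bc1b1b904 above. A paraphrase of the helper as traced at ns_masking.sh@43-45:

    # Sketch of ns_is_visible: listed NSID plus a non-zero NGUID means visible.
    ns_is_visible() {
        local nsid=$1 nguid
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1   # succeeds while nsid 1 is exposed to this host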
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:25.725 [ 1]:0x2 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8bebdfc19ca841349bb2a8ee2c83cc67 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8bebdfc19ca841349bb2a8ee2c83cc67 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:25.725 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:25.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:25.985 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.551 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:26.811 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:26.811 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b02791b-58f8-43ff-8efc-0e14313ccaf7 -a 10.0.0.2 -s 4420 -i 4 00:19:27.071 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:27.071 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:27.071 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.071 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:27.071 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:27.071 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.983 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.242 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:29.242 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.242 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:29.242 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:29.243 [ 0]:0x2 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=8bebdfc19ca841349bb2a8ee2c83cc67 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8bebdfc19ca841349bb2a8ee2c83cc67 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.243 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:29.814 [ 0]:0x1 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f4c50e7dda04aaebcdd8e6bc1b1b904 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f4c50e7dda04aaebcdd8e6bc1b1b904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.814 [ 1]:0x2 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8bebdfc19ca841349bb2a8ee2c83cc67 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8bebdfc19ca841349bb2a8ee2c83cc67 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.814 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:30.407 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:30.407 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:30.407 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:30.407 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:30.407 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.407 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:30.407 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.407 10:30:15 
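All of the visibility flips exercised above come down to three JSON-RPC methods, shown in the trace with full paths. Shortened (rpc.py stands for the scripts/rpc.py invocations in the log):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 appears to host1
    rpc.py nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again

A namespace added with --no-auto-visible is exported to no host until an explicit nvmf_ns_add_host grants it, which is exactly what the alternating ns_is_visible / NOT ns_is_visible checks verify.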
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:30.666 [ 0]:0x2 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8bebdfc19ca841349bb2a8ee2c83cc67 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8bebdfc19ca841349bb2a8ee2c83cc67 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:30.666 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.925 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:31.184 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:31.184 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b02791b-58f8-43ff-8efc-0e14313ccaf7 -a 10.0.0.2 -s 4420 -i 4 00:19:31.445 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:31.445 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:31.445 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:31.445 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:31.445 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:31.445 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:33.356 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:33.615 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:33.616 [ 0]:0x1 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f4c50e7dda04aaebcdd8e6bc1b1b904 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f4c50e7dda04aaebcdd8e6bc1b1b904 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:33.616 [ 1]:0x2 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8bebdfc19ca841349bb2a8ee2c83cc67 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8bebdfc19ca841349bb2a8ee2c83cc67 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:33.616 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:34.554 [ 0]:0x2 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8bebdfc19ca841349bb2a8ee2c83cc67 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8bebdfc19ca841349bb2a8ee2c83cc67 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:34.554 10:30:18 
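Expected failures run under NOT, the autotest_common wrapper traced at @652-679: it executes its arguments, captures the exit status, and succeeds only when the wrapped command failed. A simplified sketch (the real helper also distinguishes shell functions from binaries via 'type -t' and screens out signal exits, as the trace shows):

    # Sketch: invert a command's exit status so an expected failure is a pass.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT ns_is_visible 0x1   # passes only while nsid 1 is masked from this host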
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:34.554 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:34.813 [2024-12-09 10:30:19.270824] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:34.813 request: 00:19:34.813 { 00:19:34.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.813 "nsid": 2, 00:19:34.813 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.813 "method": "nvmf_ns_remove_host", 00:19:34.813 "req_id": 1 00:19:34.813 } 00:19:34.813 Got JSON-RPC error response 00:19:34.813 response: 00:19:34.813 { 00:19:34.813 "code": -32602, 00:19:34.813 "message": "Invalid parameters" 00:19:34.813 } 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:34.813 10:30:19 
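Namespace 2 was created with auto-visible defaults, so there is no per-host allow-list to edit and the nvmf_ns_remove_host call is rejected with the -32602 'Invalid parameters' response above; rpc.py surfaces that as a non-zero exit, which NOT converts into a pass. One way such an error path could be asserted directly (a sketch, not the script's code):

    if out=$(rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 \
            nqn.2016-06.io.spdk:host1 2>&1); then
        echo "unexpected success"; exit 1
    else
        echo "rejected as expected: $out"   # 'Invalid parameters', code -32602
    fi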
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.813 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:34.814 [ 0]:0x2 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8bebdfc19ca841349bb2a8ee2c83cc67 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8bebdfc19ca841349bb2a8ee2c83cc67 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:34.814 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:35.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2069275 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2069275 /var/tmp/host.sock 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2069275 ']' 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:35.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.072 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:35.072 [2024-12-09 10:30:19.649715] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:19:35.072 [2024-12-09 10:30:19.649825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069275 ] 00:19:35.331 [2024-12-09 10:30:19.775433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.331 [2024-12-09 10:30:19.887289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.712 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.712 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:36.712 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:36.971 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:37.538 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 05e159d4-0e06-4413-aae6-11b7dfbb3228 00:19:37.538 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:37.538 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 05E159D40E064413AAE611B7DFBB3228 -i 00:19:37.796 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b091f193-2952-4c87-a1a0-44a9836d7e13 00:19:37.796 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:37.796 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B091F19329524C87A1A044A9836D7E13 -i 00:19:38.053 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:38.618 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:38.875 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:38.875 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:39.443 nvme0n1 00:19:39.443 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:39.443 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:40.012 nvme1n2 00:19:40.012 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:40.012 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:40.012 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:40.012 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:40.012 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:40.271 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:40.271 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:40.271 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:40.271 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:40.837 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 05e159d4-0e06-4413-aae6-11b7dfbb3228 == \0\5\e\1\5\9\d\4\-\0\e\0\6\-\4\4\1\3\-\a\a\e\6\-\1\1\b\7\d\f\b\b\3\2\2\8 ]] 00:19:40.837 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:40.837 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:40.837 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:41.095 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
b091f193-2952-4c87-a1a0-44a9836d7e13 == \b\0\9\1\f\1\9\3\-\2\9\5\2\-\4\c\8\7\-\a\1\a\0\-\4\4\a\9\8\3\6\d\7\e\1\3 ]] 00:19:41.095 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:41.662 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 05e159d4-0e06-4413-aae6-11b7dfbb3228 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 05E159D40E064413AAE611B7DFBB3228 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 05E159D40E064413AAE611B7DFBB3228 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:41.920 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 05E159D40E064413AAE611B7DFBB3228 00:19:42.178 [2024-12-09 10:30:26.764582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:42.178 [2024-12-09 10:30:26.764674] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:42.178 [2024-12-09 10:30:26.764713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.178 request: 00:19:42.178 { 00:19:42.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.178 "namespace": { 00:19:42.178 "bdev_name": 
"invalid", 00:19:42.178 "nsid": 1, 00:19:42.178 "nguid": "05E159D40E064413AAE611B7DFBB3228", 00:19:42.178 "no_auto_visible": false, 00:19:42.178 "hide_metadata": false 00:19:42.178 }, 00:19:42.178 "method": "nvmf_subsystem_add_ns", 00:19:42.178 "req_id": 1 00:19:42.178 } 00:19:42.178 Got JSON-RPC error response 00:19:42.178 response: 00:19:42.178 { 00:19:42.178 "code": -32602, 00:19:42.178 "message": "Invalid parameters" 00:19:42.178 } 00:19:42.178 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:42.178 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.178 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.178 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.178 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 05e159d4-0e06-4413-aae6-11b7dfbb3228 00:19:42.178 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:42.178 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 05E159D40E064413AAE611B7DFBB3228 -i 00:19:42.745 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:44.650 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:44.650 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:44.650 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:44.908 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:44.908 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2069275 00:19:44.908 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2069275 ']' 00:19:44.908 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2069275 00:19:44.908 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:44.908 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.908 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069275 00:19:45.167 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.167 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.167 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069275' 00:19:45.167 killing process with pid 2069275 00:19:45.167 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2069275 00:19:45.167 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2069275 00:19:45.736 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.305 rmmod nvme_tcp 00:19:46.305 rmmod nvme_fabrics 00:19:46.305 rmmod nvme_keyring 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:46.305 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:46.565 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2066755 ']' 00:19:46.565 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2066755 00:19:46.565 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2066755 ']' 00:19:46.565 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2066755 00:19:46.565 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:46.565 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.565 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2066755 00:19:46.565 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.565 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.565 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2066755' 00:19:46.565 killing process with pid 2066755 00:19:46.565 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2066755 00:19:46.565 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2066755 00:19:46.824 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:46.824 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:46.824 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:46.824 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
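The JSON-RPC failure at the top of this block is ns_masking's intended negative case: nvmf_subsystem_add_ns rejects the literal string "invalid" as an NGUID with error -32602, and the test then retries with a valid 32-hex-digit NGUID derived from a UUID. A minimal sketch of that conversion, assuming uuid2nguid does no more than upper-case and strip dashes (the trace only shows its `tr -d -` step, so the helper body here is illustrative):

    # sketch: derive an NVMe NGUID from a UUID by upper-casing and dropping dashes
    uuid2nguid() {
        local uuid=$1
        echo "${uuid^^}" | tr -d -
    }
    nguid=$(uuid2nguid 05e159d4-0e06-4413-aae6-11b7dfbb3228)
    echo "$nguid"   # 05E159D40E064413AAE611B7DFBB3228, as seen in the retry above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i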
00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.825 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.371 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:49.371 00:19:49.371 real 0m34.023s 00:19:49.371 user 0m53.481s 00:19:49.371 sys 0m6.840s 00:19:49.371 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.371 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:49.371 ************************************ 00:19:49.371 END TEST nvmf_ns_masking 00:19:49.371 ************************************ 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:49.372 ************************************ 00:19:49.372 START TEST nvmf_nvme_cli 00:19:49.372 ************************************ 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:49.372 * Looking for test storage... 
00:19:49.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.372 --rc genhtml_branch_coverage=1 00:19:49.372 --rc genhtml_function_coverage=1 00:19:49.372 --rc genhtml_legend=1 00:19:49.372 --rc geninfo_all_blocks=1 00:19:49.372 --rc geninfo_unexecuted_blocks=1 00:19:49.372 00:19:49.372 ' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.372 --rc genhtml_branch_coverage=1 00:19:49.372 --rc genhtml_function_coverage=1 00:19:49.372 --rc genhtml_legend=1 00:19:49.372 --rc geninfo_all_blocks=1 00:19:49.372 --rc geninfo_unexecuted_blocks=1 00:19:49.372 00:19:49.372 ' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.372 --rc genhtml_branch_coverage=1 00:19:49.372 --rc genhtml_function_coverage=1 00:19:49.372 --rc genhtml_legend=1 00:19:49.372 --rc geninfo_all_blocks=1 00:19:49.372 --rc geninfo_unexecuted_blocks=1 00:19:49.372 00:19:49.372 ' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:49.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.372 --rc genhtml_branch_coverage=1 00:19:49.372 --rc genhtml_function_coverage=1 00:19:49.372 --rc genhtml_legend=1 00:19:49.372 --rc geninfo_all_blocks=1 00:19:49.372 --rc geninfo_unexecuted_blocks=1 00:19:49.372 00:19:49.372 ' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
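The scripts/common.sh trace above is a component-wise version comparison: `lt 1.15 2` expands to `cmp_versions 1.15 '<' 2`, the two versions are split on `.`, `-`, and `:` into arrays, and fields are compared numerically until one differs (missing fields default to 0). A condensed sketch of that logic under the simplifying assumption that every field is numeric; the function name is illustrative, not the script's own:

    # sketch: strict "version $1 < version $2", assuming purely numeric fields
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not strictly less
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # true, matching 'lt 1.15 2' in the trace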
00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.372 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:49.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:49.373 10:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:49.373 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:52.665 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.665 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.665 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.665 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.665 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.665 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.665 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:52.666 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:52.666 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.666 
10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:52.666 Found net devices under 0000:84:00.0: cvl_0_0 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:52.666 Found net devices under 0000:84:00.1: cvl_0_1 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:52.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:19:52.666 00:19:52.666 --- 10.0.0.2 ping statistics --- 00:19:52.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.666 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:19:52.666 00:19:52.666 --- 10.0.0.1 ping statistics --- 00:19:52.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.666 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.666 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:52.667 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2072653 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2072653 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2072653 ']' 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.667 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:52.667 [2024-12-09 10:30:37.083349] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
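The bring-up above isolates the target in its own network namespace so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic crosses the physical e810 link, verifies reachability in both directions with ping, and then launches nvmf_tgt inside that namespace. Condensed from the commands in the trace (interface names are specific to this run):

    # the netns bring-up traced above, condensed
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps the peer port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # launch the target inside the namespace; -m 0xF starts four reactors
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &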
00:19:52.667 [2024-12-09 10:30:37.083448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.667 [2024-12-09 10:30:37.214148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.926 [2024-12-09 10:30:37.337774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.926 [2024-12-09 10:30:37.337884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.926 [2024-12-09 10:30:37.337920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.926 [2024-12-09 10:30:37.337956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.926 [2024-12-09 10:30:37.337981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.926 [2024-12-09 10:30:37.341552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.926 [2024-12-09 10:30:37.341665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.926 [2024-12-09 10:30:37.341758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.926 [2024-12-09 10:30:37.341763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.863 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.863 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:53.863 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.863 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.863 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.864 [2024-12-09 10:30:38.418854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.864 Malloc0 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
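With nvmf_tgt up and its four reactors started, the test provisions it over JSON-RPC: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, then a subsystem carrying both namespaces plus data and discovery listeners (the subsystem and listener calls follow just below). The same sequence as plain rpc.py invocations, collected from the rpc_cmd lines in this trace:

    # rpc.py equivalents of the rpc_cmd provisioning sequence in this trace
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420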
00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.864 Malloc1 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.864 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:54.122 [2024-12-09 10:30:38.518904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:19:54.122 00:19:54.122 Discovery Log Number of Records 2, Generation counter 2 00:19:54.122 =====Discovery Log Entry 0====== 00:19:54.122 trtype: tcp 00:19:54.122 adrfam: ipv4 00:19:54.122 subtype: current discovery subsystem 00:19:54.122 treq: not required 00:19:54.122 portid: 0 00:19:54.122 trsvcid: 4420 00:19:54.122 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:54.122 traddr: 10.0.0.2 00:19:54.122 eflags: explicit discovery connections, duplicate discovery information 00:19:54.122 sectype: none 00:19:54.122 =====Discovery Log Entry 1====== 00:19:54.122 trtype: tcp 00:19:54.122 adrfam: ipv4 00:19:54.122 subtype: nvme subsystem 00:19:54.122 treq: not required 00:19:54.122 portid: 0 00:19:54.122 trsvcid: 4420 00:19:54.122 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:54.122 traddr: 10.0.0.2 00:19:54.122 eflags: none 00:19:54.122 sectype: none 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:54.122 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:55.060 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:55.060 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:55.060 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:55.060 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:55.060 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:55.060 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:57.096 10:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:57.096 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:57.097 /dev/nvme0n2 ]] 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:57.097 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.355 10:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:57.355 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:57.356 rmmod nvme_tcp 00:19:57.356 rmmod nvme_fabrics 00:19:57.356 rmmod nvme_keyring 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2072653 ']' 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2072653 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2072653 ']' 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2072653 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:57.356 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.356 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2072653 00:19:57.626 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.626 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.626 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2072653' 00:19:57.626 killing process with pid 2072653 00:19:57.626 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2072653 00:19:57.626 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2072653 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.885 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:00.420 00:20:00.420 real 0m10.913s 00:20:00.420 user 0m21.348s 00:20:00.420 sys 0m3.436s 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:00.420 ************************************ 00:20:00.420 END TEST nvmf_nvme_cli 00:20:00.420 ************************************ 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:00.420 ************************************ 00:20:00.420 START TEST nvmf_vfio_user 00:20:00.420 ************************************ 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:20:00.420 * Looking for test storage... 00:20:00.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:00.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.420 --rc genhtml_branch_coverage=1 00:20:00.420 --rc genhtml_function_coverage=1 00:20:00.420 --rc genhtml_legend=1 00:20:00.420 --rc geninfo_all_blocks=1 00:20:00.420 --rc geninfo_unexecuted_blocks=1 00:20:00.420 00:20:00.420 ' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:00.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.420 --rc genhtml_branch_coverage=1 00:20:00.420 --rc genhtml_function_coverage=1 00:20:00.420 --rc genhtml_legend=1 00:20:00.420 --rc geninfo_all_blocks=1 00:20:00.420 --rc geninfo_unexecuted_blocks=1 00:20:00.420 00:20:00.420 ' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:00.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.420 --rc genhtml_branch_coverage=1 00:20:00.420 --rc genhtml_function_coverage=1 00:20:00.420 --rc genhtml_legend=1 00:20:00.420 --rc geninfo_all_blocks=1 00:20:00.420 --rc geninfo_unexecuted_blocks=1 00:20:00.420 00:20:00.420 ' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:00.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.420 --rc genhtml_branch_coverage=1 00:20:00.420 --rc genhtml_function_coverage=1 00:20:00.420 --rc genhtml_legend=1 00:20:00.420 --rc geninfo_all_blocks=1 00:20:00.420 --rc geninfo_unexecuted_blocks=1 00:20:00.420 00:20:00.420 ' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.420 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
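Two shell notes on the traces above. The lcov gate (scripts/common.sh, `lt 1.15 2`) reduces to a component-wise numeric comparison; a condensed sketch of that logic, assuming purely numeric components (the original additionally guards each component with a decimal check):

  # Split versions on '.'/'-' and compare component-wise; missing
  # components default to 0. Returns success when $1 < $2.
  lt() {
      local -a ver1 ver2
      IFS='.-' read -ra ver1 <<< "$1"
      IFS='.-' read -ra ver2 <<< "$2"
      local v a b
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          ((a > b)) && return 1
          ((a < b)) && return 0
      done
      return 1
  }
  lt 1.15 2 && echo "lcov < 2: use the pre-2.0 option spelling"

The one genuine error in this stretch is the `[: : integer expression expected` message: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`, handing test(1) an empty string where it expects an integer. A defensive spelling would default the variable before the numeric test; the name below is a placeholder, since the trace does not show which variable is empty:

  FLAG=""                           # empty, as in the failing trace
  if [ "${FLAG:-0}" -eq 1 ]; then   # ':-0' also substitutes for the empty string
      echo "feature enabled"
  fi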
00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2073668 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2073668' 00:20:00.421 Process pid: 2073668 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2073668 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2073668 ']' 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.421 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:00.421 [2024-12-09 10:30:44.908655] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:20:00.421 [2024-12-09 10:30:44.908861] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.421 [2024-12-09 10:30:45.069107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.679 [2024-12-09 10:30:45.172871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.679 [2024-12-09 10:30:45.172993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
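The target was just launched with `-i 0 -e 0xFFFF -m '[0,1,2,3]'`, and the harness now blocks in waitforlisten until /var/tmp/spdk.sock answers. A minimal sketch of that polling pattern, simplified from the real helper in common/autotest_common.sh (the retry count and interval here are illustrative):

  # Poll the SPDK RPC socket until the target responds or the retry
  # budget is exhausted; bail out early if the process has died.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target process gone
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
              -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }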
00:20:00.679 [2024-12-09 10:30:45.173032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.679 [2024-12-09 10:30:45.173070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.679 [2024-12-09 10:30:45.173097] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.679 [2024-12-09 10:30:45.176334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.679 [2024-12-09 10:30:45.176392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.679 [2024-12-09 10:30:45.176491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.679 [2024-12-09 10:30:45.176494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.679 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.679 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:00.679 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:02.054 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:20:02.312 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:02.312 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:02.312 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:02.312 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:02.312 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:03.248 Malloc1 00:20:03.248 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:03.814 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:04.383 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:04.949 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:04.949 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:04.949 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:05.516 Malloc2 00:20:05.516 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
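The RPC calls running through these records (and finishing with cnode2's namespace and listener just below) follow one per-device pattern; condensed here for readability, with rpc.py standing in for the full scripts/rpc.py path used in the log:

  rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done

Note that each listener address is a directory rather than an IP: the vfio-user transport exposes the controller as a socket (the cntrl file seen later in the trace) under that path for a local client to map.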
00:20:06.080 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:06.338 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:06.902 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:20:06.902 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:20:06.902 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:06.902 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:06.902 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:20:06.902 10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:06.902 [2024-12-09 10:30:51.341830] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:20:06.902 [2024-12-09 10:30:51.341884] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074470 ] 00:20:06.902 [2024-12-09 10:30:51.397871] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:20:06.902 [2024-12-09 10:30:51.408207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:06.902 [2024-12-09 10:30:51.408240] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efd19312000 00:20:06.902 [2024-12-09 10:30:51.409205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.410202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.411204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.412212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.413218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.414220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.415227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.416228] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:06.902 [2024-12-09 10:30:51.417234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:06.902 [2024-12-09 10:30:51.417255] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efd19307000 00:20:06.902 [2024-12-09 10:30:51.418335] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:06.902 [2024-12-09 10:30:51.432446] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:20:06.902 [2024-12-09 10:30:51.432490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:20:06.902 [2024-12-09 10:30:51.441383] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:06.902 [2024-12-09 10:30:51.441445] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:06.902 [2024-12-09 10:30:51.441546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:20:06.902 [2024-12-09 10:30:51.441580] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:20:06.902 [2024-12-09 10:30:51.441591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:20:06.902 [2024-12-09 10:30:51.442372] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:20:06.903 [2024-12-09 10:30:51.442399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:20:06.903 [2024-12-09 10:30:51.442413] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:20:06.903 [2024-12-09 10:30:51.443375] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:06.903 [2024-12-09 10:30:51.443395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:20:06.903 [2024-12-09 10:30:51.443409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:06.903 [2024-12-09 10:30:51.444383] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:20:06.903 [2024-12-09 10:30:51.444402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:06.903 [2024-12-09 10:30:51.445387] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
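The get_reg/set_reg debug lines here and just below are ordinary NVMe 1.3 controller registers, reached through BAR0 of the emulated device. Decoded against the values in this trace:

  # offset  register  role in this trace
  #  0x00   CAP   capabilities            read:  0x201e0100ff
  #  0x08   VS    version                 read:  0x10300 (NVMe 1.3.0)
  #  0x14   CC    configuration           write: 0x460001 sets EN=1
  #  0x1c   CSTS  status                  read:  0x0, then 0x1 (RDY) once enabled
  #  0x24   AQA   admin queue attributes  write: 0xff00ff (0-based 256-entry SQ/CQ)
  #  0x28   ASQ   admin SQ base address   write: 0x2000003c0000
  #  0x30   ACQ   admin CQ base address   write: 0x2000003be000

The init state machine is the standard one: read VS and CAP, observe CC.EN = 0 and CSTS.RDY = 0, program the admin queues, set CC.EN = 1, then poll CSTS until RDY = 1 before issuing Identify.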
00:20:06.903 [2024-12-09 10:30:51.445412] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:06.903 [2024-12-09 10:30:51.445422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:06.903 [2024-12-09 10:30:51.445433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:06.903 [2024-12-09 10:30:51.445543] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:20:06.903 [2024-12-09 10:30:51.445551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:06.903 [2024-12-09 10:30:51.445561] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:20:06.903 [2024-12-09 10:30:51.446407] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:20:06.903 [2024-12-09 10:30:51.447400] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:20:06.903 [2024-12-09 10:30:51.448405] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:06.903 [2024-12-09 10:30:51.449404] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:06.903 [2024-12-09 10:30:51.449536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:06.903 [2024-12-09 10:30:51.450425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:20:06.903 [2024-12-09 10:30:51.450444] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:06.903 [2024-12-09 10:30:51.450453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:20:06.903 [2024-12-09 10:30:51.450491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450528] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:06.903 [2024-12-09 10:30:51.450539] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:06.903 [2024-12-09 10:30:51.450546] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:06.903 [2024-12-09 10:30:51.450567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.450643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.450668] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:20:06.903 [2024-12-09 10:30:51.450677] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:20:06.903 [2024-12-09 10:30:51.450685] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:20:06.903 [2024-12-09 10:30:51.450694] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:06.903 [2024-12-09 10:30:51.450728] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:20:06.903 [2024-12-09 10:30:51.450739] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:20:06.903 [2024-12-09 10:30:51.450747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.450801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.450818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.903 [2024-12-09 10:30:51.450831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.903 [2024-12-09 10:30:51.450844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.903 [2024-12-09 10:30:51.450856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.903 [2024-12-09 10:30:51.450864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.450915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.450926] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:20:06.903 
[2024-12-09 10:30:51.450935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.450970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.450991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451106] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:06.903 [2024-12-09 10:30:51.451114] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:06.903 [2024-12-09 10:30:51.451120] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:06.903 [2024-12-09 10:30:51.451133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451166] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:20:06.903 [2024-12-09 10:30:51.451190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451218] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:06.903 [2024-12-09 10:30:51.451226] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:06.903 [2024-12-09 10:30:51.451232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:06.903 [2024-12-09 10:30:51.451240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451330] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:06.903 [2024-12-09 10:30:51.451337] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:06.903 [2024-12-09 10:30:51.451343] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:06.903 [2024-12-09 10:30:51.451352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451448] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:06.903 [2024-12-09 10:30:51.451455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:20:06.903 [2024-12-09 10:30:51.451468] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:20:06.903 [2024-12-09 10:30:51.451498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451631] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:06.903 [2024-12-09 10:30:51.451641] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:06.903 [2024-12-09 10:30:51.451647] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:06.903 [2024-12-09 10:30:51.451653] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:06.903 [2024-12-09 10:30:51.451658] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:06.903 [2024-12-09 10:30:51.451667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:06.903 [2024-12-09 10:30:51.451679] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:06.903 [2024-12-09 10:30:51.451687] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:06.903 [2024-12-09 10:30:51.451693] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:06.903 [2024-12-09 10:30:51.451717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451738] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:06.903 [2024-12-09 10:30:51.451746] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:06.903 [2024-12-09 10:30:51.451752] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:06.903 [2024-12-09 10:30:51.451761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451774] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:06.903 [2024-12-09 10:30:51.451782] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:06.903 [2024-12-09 10:30:51.451788] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:06.903 [2024-12-09 10:30:51.451797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:06.903 [2024-12-09 10:30:51.451809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:06.903 [2024-12-09 10:30:51.451873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:06.903 ===================================================== 00:20:06.903 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:06.903 ===================================================== 00:20:06.903 Controller Capabilities/Features 00:20:06.903 ================================ 00:20:06.903 Vendor ID: 4e58 00:20:06.903 Subsystem Vendor ID: 4e58 00:20:06.903 Serial Number: SPDK1 00:20:06.903 Model Number: SPDK bdev Controller 00:20:06.903 Firmware Version: 25.01 00:20:06.903 Recommended Arb Burst: 6 00:20:06.903 IEEE OUI Identifier: 8d 6b 50 00:20:06.903 Multi-path I/O 00:20:06.903 May have multiple subsystem ports: Yes 00:20:06.903 May have multiple controllers: Yes 00:20:06.903 Associated with SR-IOV VF: No 00:20:06.903 Max Data Transfer Size: 131072 00:20:06.903 Max Number of Namespaces: 32 00:20:06.903 Max Number of I/O Queues: 127 00:20:06.903 NVMe Specification Version (VS): 1.3 00:20:06.903 NVMe Specification Version (Identify): 1.3 00:20:06.903 Maximum Queue Entries: 256 00:20:06.903 Contiguous Queues Required: Yes 00:20:06.903 Arbitration Mechanisms Supported 00:20:06.903 Weighted Round Robin: Not Supported 00:20:06.903 Vendor Specific: Not Supported 00:20:06.903 Reset Timeout: 15000 ms 00:20:06.903 Doorbell Stride: 4 bytes 00:20:06.903 NVM Subsystem Reset: Not Supported 00:20:06.903 Command Sets Supported 00:20:06.903 NVM Command Set: Supported 00:20:06.903 Boot Partition: Not Supported 00:20:06.903 Memory Page Size Minimum: 4096 bytes 00:20:06.903 Memory Page Size Maximum: 4096 bytes 00:20:06.903 Persistent Memory Region: Not Supported 00:20:06.903 Optional Asynchronous Events Supported 00:20:06.903 Namespace Attribute Notices: Supported 00:20:06.903 Firmware Activation Notices: Not Supported 00:20:06.903 ANA Change Notices: Not Supported 00:20:06.903 PLE Aggregate Log Change Notices: Not Supported 00:20:06.903 LBA Status Info Alert Notices: Not Supported 00:20:06.903 EGE Aggregate Log Change Notices: Not Supported 00:20:06.903 Normal NVM Subsystem Shutdown event: Not Supported 00:20:06.904 Zone Descriptor Change Notices: Not Supported 00:20:06.904 Discovery Log Change Notices: Not Supported 00:20:06.904 Controller Attributes 00:20:06.904 128-bit Host Identifier: Supported 00:20:06.904 Non-Operational Permissive Mode: Not Supported 00:20:06.904 NVM Sets: Not Supported 00:20:06.904 Read Recovery Levels: Not Supported 00:20:06.904 Endurance Groups: Not Supported 00:20:06.904 Predictable Latency Mode: Not Supported 00:20:06.904 Traffic Based Keep ALive: Not Supported 00:20:06.904 Namespace Granularity: Not Supported 00:20:06.904 SQ Associations: Not Supported 00:20:06.904 UUID List: Not Supported 00:20:06.904 Multi-Domain Subsystem: Not Supported 00:20:06.904 Fixed Capacity Management: Not Supported 00:20:06.904 Variable Capacity Management: Not Supported 00:20:06.904 Delete Endurance Group: Not Supported 00:20:06.904 Delete NVM Set: Not Supported 00:20:06.904 Extended LBA Formats Supported: Not Supported 00:20:06.904 Flexible Data Placement Supported: Not Supported 00:20:06.904 00:20:06.904 Controller Memory Buffer Support 00:20:06.904 ================================ 00:20:06.904 
Supported: No 00:20:06.904 00:20:06.904 Persistent Memory Region Support 00:20:06.904 ================================ 00:20:06.904 Supported: No 00:20:06.904 00:20:06.904 Admin Command Set Attributes 00:20:06.904 ============================ 00:20:06.904 Security Send/Receive: Not Supported 00:20:06.904 Format NVM: Not Supported 00:20:06.904 Firmware Activate/Download: Not Supported 00:20:06.904 Namespace Management: Not Supported 00:20:06.904 Device Self-Test: Not Supported 00:20:06.904 Directives: Not Supported 00:20:06.904 NVMe-MI: Not Supported 00:20:06.904 Virtualization Management: Not Supported 00:20:06.904 Doorbell Buffer Config: Not Supported 00:20:06.904 Get LBA Status Capability: Not Supported 00:20:06.904 Command & Feature Lockdown Capability: Not Supported 00:20:06.904 Abort Command Limit: 4 00:20:06.904 Async Event Request Limit: 4 00:20:06.904 Number of Firmware Slots: N/A 00:20:06.904 Firmware Slot 1 Read-Only: N/A 00:20:06.904 Firmware Activation Without Reset: N/A 00:20:06.904 Multiple Update Detection Support: N/A 00:20:06.904 Firmware Update Granularity: No Information Provided 00:20:06.904 Per-Namespace SMART Log: No 00:20:06.904 Asymmetric Namespace Access Log Page: Not Supported 00:20:06.904 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:20:06.904 Command Effects Log Page: Supported 00:20:06.904 Get Log Page Extended Data: Supported 00:20:06.904 Telemetry Log Pages: Not Supported 00:20:06.904 Persistent Event Log Pages: Not Supported 00:20:06.904 Supported Log Pages Log Page: May Support 00:20:06.904 Commands Supported & Effects Log Page: Not Supported 00:20:06.904 Feature Identifiers & Effects Log Page:May Support 00:20:06.904 NVMe-MI Commands & Effects Log Page: May Support 00:20:06.904 Data Area 4 for Telemetry Log: Not Supported 00:20:06.904 Error Log Page Entries Supported: 128 00:20:06.904 Keep Alive: Supported 00:20:06.904 Keep Alive Granularity: 10000 ms 00:20:06.904 00:20:06.904 NVM Command Set Attributes 00:20:06.904 ========================== 00:20:06.904 Submission Queue Entry Size 00:20:06.904 Max: 64 00:20:06.904 Min: 64 00:20:06.904 Completion Queue Entry Size 00:20:06.904 Max: 16 00:20:06.904 Min: 16 00:20:06.904 Number of Namespaces: 32 00:20:06.904 Compare Command: Supported 00:20:06.904 Write Uncorrectable Command: Not Supported 00:20:06.904 Dataset Management Command: Supported 00:20:06.904 Write Zeroes Command: Supported 00:20:06.904 Set Features Save Field: Not Supported 00:20:06.904 Reservations: Not Supported 00:20:06.904 Timestamp: Not Supported 00:20:06.904 Copy: Supported 00:20:06.904 Volatile Write Cache: Present 00:20:06.904 Atomic Write Unit (Normal): 1 00:20:06.904 Atomic Write Unit (PFail): 1 00:20:06.904 Atomic Compare & Write Unit: 1 00:20:06.904 Fused Compare & Write: Supported 00:20:06.904 Scatter-Gather List 00:20:06.904 SGL Command Set: Supported (Dword aligned) 00:20:06.904 SGL Keyed: Not Supported 00:20:06.904 SGL Bit Bucket Descriptor: Not Supported 00:20:06.904 SGL Metadata Pointer: Not Supported 00:20:06.904 Oversized SGL: Not Supported 00:20:06.904 SGL Metadata Address: Not Supported 00:20:06.904 SGL Offset: Not Supported 00:20:06.904 Transport SGL Data Block: Not Supported 00:20:06.904 Replay Protected Memory Block: Not Supported 00:20:06.904 00:20:06.904 Firmware Slot Information 00:20:06.904 ========================= 00:20:06.904 Active slot: 1 00:20:06.904 Slot 1 Firmware Revision: 25.01 00:20:06.904 00:20:06.904 00:20:06.904 Commands Supported and Effects 00:20:06.904 ============================== 00:20:06.904 Admin 
Commands 00:20:06.904 -------------- 00:20:06.904 Get Log Page (02h): Supported 00:20:06.904 Identify (06h): Supported 00:20:06.904 Abort (08h): Supported 00:20:06.904 Set Features (09h): Supported 00:20:06.904 Get Features (0Ah): Supported 00:20:06.904 Asynchronous Event Request (0Ch): Supported 00:20:06.904 Keep Alive (18h): Supported 00:20:06.904 I/O Commands 00:20:06.904 ------------ 00:20:06.904 Flush (00h): Supported LBA-Change 00:20:06.904 Write (01h): Supported LBA-Change 00:20:06.904 Read (02h): Supported 00:20:06.904 Compare (05h): Supported 00:20:06.904 Write Zeroes (08h): Supported LBA-Change 00:20:06.904 Dataset Management (09h): Supported LBA-Change 00:20:06.904 Copy (19h): Supported LBA-Change 00:20:06.904 00:20:06.904 Error Log 00:20:06.904 ========= 00:20:06.904 00:20:06.904 Arbitration 00:20:06.904 =========== 00:20:06.904 Arbitration Burst: 1 00:20:06.904 00:20:06.904 Power Management 00:20:06.904 ================ 00:20:06.904 Number of Power States: 1 00:20:06.904 Current Power State: Power State #0 00:20:06.904 Power State #0: 00:20:06.904 Max Power: 0.00 W 00:20:06.904 Non-Operational State: Operational 00:20:06.904 Entry Latency: Not Reported 00:20:06.904 Exit Latency: Not Reported 00:20:06.904 Relative Read Throughput: 0 00:20:06.904 Relative Read Latency: 0 00:20:06.904 Relative Write Throughput: 0 00:20:06.904 Relative Write Latency: 0 00:20:06.904 Idle Power: Not Reported 00:20:06.904 Active Power: Not Reported 00:20:06.904 Non-Operational Permissive Mode: Not Supported 00:20:06.904 00:20:06.904 Health Information 00:20:06.904 ================== 00:20:06.904 Critical Warnings: 00:20:06.904 Available Spare Space: OK 00:20:06.904 Temperature: OK 00:20:06.904 Device Reliability: OK 00:20:06.904 Read Only: No 00:20:06.904 Volatile Memory Backup: OK 00:20:06.904 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:06.904 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:06.904 Available Spare: 0% 00:20:06.904 Available Spare Threshold: 0% 00:20:06.904 Life Percentage Used: 0% 00:20:06.904 Data Units Read: 0 00:20:06.904 Data Units Written: 0 00:20:06.904 Host Read Commands: 0 00:20:06.904 Host Write Commands: 0 00:20:06.904 Controller Busy Time: 0 minutes 00:20:06.904 Power Cycles: 0 00:20:06.904 Power On Hours: 0 hours 00:20:06.904 Unsafe Shutdowns: 0 00:20:06.904 Unrecoverable Media Errors: 0 00:20:06.904 Lifetime Error Log Entries: 0 00:20:06.904 Warning Temperature Time: 0 minutes 00:20:06.904 Critical Temperature Time: 0 minutes 00:20:06.904 00:20:06.904 Number of Queues 00:20:06.904 ================ 00:20:06.904 Number of I/O Submission Queues: 127 00:20:06.904 Number of I/O Completion Queues: 127 00:20:06.904 00:20:06.904 Active Namespaces 00:20:06.904 ================= 00:20:06.904 Namespace ID:1 00:20:06.904 Error Recovery Timeout: Unlimited 00:20:06.904 Command Set Identifier: NVM (00h) 00:20:06.904 Deallocate: Supported 00:20:06.904 Deallocated/Unwritten Error: Not Supported 00:20:06.904 Deallocated Read Value: Unknown 00:20:06.904 Deallocate in Write Zeroes: Not Supported 00:20:06.904 Deallocated Guard Field: 0xFFFF 00:20:06.904 Flush: Supported 00:20:06.904 Reservation: Supported 00:20:06.904 Namespace Sharing Capabilities: Multiple Controllers 00:20:06.904 Size (in LBAs): 131072 (0GiB) 00:20:06.904 Capacity (in LBAs): 131072 (0GiB) 00:20:06.904 Utilization (in LBAs): 131072 (0GiB) 00:20:06.904 NGUID: 093DE36957D74FDFBAF8AEA19712C601 00:20:06.904 UUID: 093de369-57d7-4fdf-baf8-aea19712c601 00:20:06.904 Thin Provisioning: Not Supported 00:20:06.904 Per-NS Atomic Units: Yes 00:20:06.904 Atomic Boundary Size (Normal): 0 00:20:06.904 Atomic Boundary Size (PFail): 0 00:20:06.904 Atomic Boundary Offset: 0 00:20:06.904 Maximum Single Source Range Length: 65535 00:20:06.904 Maximum Copy Length: 65535 00:20:06.904 Maximum Source Range Count: 1 00:20:06.904 NGUID/EUI64 Never Reused: No 00:20:06.904 Namespace Write Protected: No 00:20:06.904 Number of LBA Formats: 1 00:20:06.904 Current LBA Format: LBA Format #00 00:20:06.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:06.904 00:20:06.904 
[2024-12-09 10:30:51.451991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:06.904 [2024-12-09 10:30:51.452008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:06.904 [2024-12-09 10:30:51.452053] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:20:06.904 [2024-12-09 10:30:51.452087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.904 [2024-12-09 10:30:51.452098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.904 [2024-12-09 10:30:51.452108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.904 [2024-12-09 10:30:51.452117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.904 [2024-12-09 10:30:51.452439] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:06.904 [2024-12-09 10:30:51.452461] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:20:06.904 [2024-12-09 10:30:51.453435] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:06.904 [2024-12-09 10:30:51.453510] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:20:06.904 [2024-12-09 10:30:51.453532] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:20:06.904 [2024-12-09 10:30:51.454446] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:20:06.904 [2024-12-09 10:30:51.454469] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:20:06.904 [2024-12-09 10:30:51.454525] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:20:06.904 [2024-12-09 10:30:51.460733] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:06.904 
10:30:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
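A quick arithmetic cross-check for the throughput columns in the perf runs below: at a 4096-byte I/O size, MiB/s is IOPS x 4096 / 2^20, i.e. IOPS / 256:

  awk 'BEGIN { printf "%.2f\n", 30952.03 / 256 }'   # read run  -> 120.91 MiB/s
  awk 'BEGIN { printf "%.2f\n", 16025.60 / 256 }'   # write run ->  62.60 MiB/s

Both match the Device Information tables that follow.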
00:20:07.162 [2024-12-09 10:30:51.723658] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:12.442 Initializing NVMe Controllers 00:20:12.442 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:12.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:12.442 Initialization complete. Launching workers. 00:20:12.442 ======================================================== 00:20:12.442 Latency(us) 00:20:12.442 Device Information : IOPS MiB/s Average min max 00:20:12.442 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30952.03 120.91 4134.38 1237.79 8336.09 00:20:12.442 ======================================================== 00:20:12.442 Total : 30952.03 120.91 4134.38 1237.79 8336.09 00:20:12.442 00:20:12.442 [2024-12-09 10:30:56.741183] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:12.442 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:12.442 [2024-12-09 10:30:57.003435] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:17.730 Initializing NVMe Controllers 00:20:17.730 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:17.730 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:17.730 Initialization complete. Launching workers. 
00:20:17.730 ======================================================== 00:20:17.730 Latency(us) 00:20:17.730 Device Information : IOPS MiB/s Average min max 00:20:17.730 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7995.62 4976.25 15973.74 00:20:17.730 ======================================================== 00:20:17.730 Total : 16025.60 62.60 7995.62 4976.25 15973.74 00:20:17.730 00:20:17.730 [2024-12-09 10:31:02.039681] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:17.730 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:17.730 [2024-12-09 10:31:02.323006] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:23.027 [2024-12-09 10:31:07.407138] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:23.027 Initializing NVMe Controllers 00:20:23.027 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:23.027 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:23.027 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:20:23.027 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:20:23.027 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:20:23.027 Initialization complete. Launching workers. 00:20:23.027 Starting thread on core 2 00:20:23.027 Starting thread on core 3 00:20:23.027 Starting thread on core 1 00:20:23.027 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:20:23.285 [2024-12-09 10:31:07.753726] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:26.573 [2024-12-09 10:31:10.829549] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:26.573 Initializing NVMe Controllers 00:20:26.573 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:26.573 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:26.573 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:20:26.573 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:20:26.573 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:20:26.573 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:20:26.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:26.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:26.573 Initialization complete. Launching workers. 
00:20:26.573 Starting thread on core 1 with urgent priority queue 00:20:26.573 Starting thread on core 2 with urgent priority queue 00:20:26.573 Starting thread on core 3 with urgent priority queue 00:20:26.573 Starting thread on core 0 with urgent priority queue 00:20:26.573 SPDK bdev Controller (SPDK1 ) core 0: 5331.33 IO/s 18.76 secs/100000 ios 00:20:26.573 SPDK bdev Controller (SPDK1 ) core 1: 4980.33 IO/s 20.08 secs/100000 ios 00:20:26.573 SPDK bdev Controller (SPDK1 ) core 2: 4616.00 IO/s 21.66 secs/100000 ios 00:20:26.573 SPDK bdev Controller (SPDK1 ) core 3: 6070.67 IO/s 16.47 secs/100000 ios 00:20:26.573 ======================================================== 00:20:26.573 00:20:26.573 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:26.573 [2024-12-09 10:31:11.170267] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:26.573 Initializing NVMe Controllers 00:20:26.573 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:26.573 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:26.573 Namespace ID: 1 size: 0GB 00:20:26.573 Initialization complete. 00:20:26.573 INFO: using host memory buffer for IO 00:20:26.573 Hello world! 00:20:26.573 [2024-12-09 10:31:11.203870] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:26.832 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:27.091 [2024-12-09 10:31:11.540205] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:28.030 Initializing NVMe Controllers 00:20:28.030 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:28.030 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:28.030 Initialization complete. Launching workers. 
00:20:28.030 submit (in ns) avg, min, max = 9064.3, 3528.9, 4018076.7 00:20:28.030 complete (in ns) avg, min, max = 25107.0, 2072.2, 4060581.1 00:20:28.030 00:20:28.030 Submit histogram 00:20:28.030 ================ 00:20:28.030 Range in us Cumulative Count 00:20:28.030 3.508 - 3.532: 0.0079% ( 1) 00:20:28.030 3.532 - 3.556: 0.0159% ( 1) 00:20:28.030 3.556 - 3.579: 0.2461% ( 29) 00:20:28.030 3.579 - 3.603: 1.4372% ( 150) 00:20:28.030 3.603 - 3.627: 4.6609% ( 406) 00:20:28.030 3.627 - 3.650: 11.5293% ( 865) 00:20:28.030 3.650 - 3.674: 19.4855% ( 1002) 00:20:28.030 3.674 - 3.698: 31.2212% ( 1478) 00:20:28.030 3.698 - 3.721: 41.1069% ( 1245) 00:20:28.030 3.721 - 3.745: 50.3573% ( 1165) 00:20:28.030 3.745 - 3.769: 56.0981% ( 723) 00:20:28.030 3.769 - 3.793: 61.4975% ( 680) 00:20:28.030 3.793 - 3.816: 65.4200% ( 494) 00:20:28.030 3.816 - 3.840: 68.6438% ( 406) 00:20:28.030 3.840 - 3.864: 71.8437% ( 403) 00:20:28.030 3.864 - 3.887: 75.0119% ( 399) 00:20:28.030 3.887 - 3.911: 78.5056% ( 440) 00:20:28.030 3.911 - 3.935: 82.2773% ( 475) 00:20:28.030 3.935 - 3.959: 85.3502% ( 387) 00:20:28.030 3.959 - 3.982: 87.6211% ( 286) 00:20:28.030 3.982 - 4.006: 89.5029% ( 237) 00:20:28.030 4.006 - 4.030: 90.7972% ( 163) 00:20:28.030 4.030 - 4.053: 92.0835% ( 162) 00:20:28.030 4.053 - 4.077: 93.1158% ( 130) 00:20:28.030 4.077 - 4.101: 93.7907% ( 85) 00:20:28.030 4.101 - 4.124: 94.5609% ( 97) 00:20:28.030 4.124 - 4.148: 95.0691% ( 64) 00:20:28.030 4.148 - 4.172: 95.4423% ( 47) 00:20:28.030 4.172 - 4.196: 95.6170% ( 22) 00:20:28.030 4.196 - 4.219: 95.8393% ( 28) 00:20:28.030 4.219 - 4.243: 95.9346% ( 12) 00:20:28.030 4.243 - 4.267: 96.0457% ( 14) 00:20:28.030 4.267 - 4.290: 96.1569% ( 14) 00:20:28.030 4.290 - 4.314: 96.2363% ( 10) 00:20:28.030 4.314 - 4.338: 96.3236% ( 11) 00:20:28.030 4.338 - 4.361: 96.4030% ( 10) 00:20:28.030 4.361 - 4.385: 96.4666% ( 8) 00:20:28.030 4.385 - 4.409: 96.4983% ( 4) 00:20:28.030 4.409 - 4.433: 96.5619% ( 8) 00:20:28.030 4.433 - 4.456: 96.5857% ( 3) 00:20:28.030 4.456 - 4.480: 96.6016% ( 2) 00:20:28.030 4.480 - 4.504: 96.6254% ( 3) 00:20:28.030 4.504 - 4.527: 96.6413% ( 2) 00:20:28.030 4.527 - 4.551: 96.6492% ( 1) 00:20:28.030 4.599 - 4.622: 96.6571% ( 1) 00:20:28.030 4.622 - 4.646: 96.6651% ( 1) 00:20:28.030 4.670 - 4.693: 96.6889% ( 3) 00:20:28.030 4.717 - 4.741: 96.7207% ( 4) 00:20:28.030 4.741 - 4.764: 96.7445% ( 3) 00:20:28.030 4.764 - 4.788: 96.7524% ( 1) 00:20:28.030 4.788 - 4.812: 96.8001% ( 6) 00:20:28.030 4.812 - 4.836: 96.8715% ( 9) 00:20:28.030 4.836 - 4.859: 96.9192% ( 6) 00:20:28.030 4.859 - 4.883: 97.0145% ( 12) 00:20:28.030 4.883 - 4.907: 97.0462% ( 4) 00:20:28.030 4.907 - 4.930: 97.0939% ( 6) 00:20:28.030 4.930 - 4.954: 97.1177% ( 3) 00:20:28.030 4.954 - 4.978: 97.1733% ( 7) 00:20:28.030 4.978 - 5.001: 97.2288% ( 7) 00:20:28.030 5.001 - 5.025: 97.2447% ( 2) 00:20:28.030 5.025 - 5.049: 97.2685% ( 3) 00:20:28.030 5.049 - 5.073: 97.3162% ( 6) 00:20:28.030 5.073 - 5.096: 97.3638% ( 6) 00:20:28.030 5.096 - 5.120: 97.3956% ( 4) 00:20:28.030 5.120 - 5.144: 97.4432% ( 6) 00:20:28.030 5.144 - 5.167: 97.4512% ( 1) 00:20:28.030 5.167 - 5.191: 97.4829% ( 4) 00:20:28.030 5.191 - 5.215: 97.4988% ( 2) 00:20:28.030 5.215 - 5.239: 97.5385% ( 5) 00:20:28.030 5.262 - 5.286: 97.5703% ( 4) 00:20:28.030 5.286 - 5.310: 97.5782% ( 1) 00:20:28.030 5.333 - 5.357: 97.5941% ( 2) 00:20:28.030 5.381 - 5.404: 97.6020% ( 1) 00:20:28.030 5.404 - 5.428: 97.6100% ( 1) 00:20:28.030 5.428 - 5.452: 97.6179% ( 1) 00:20:28.030 5.476 - 5.499: 97.6259% ( 1) 00:20:28.030 5.499 - 5.523: 97.6338% ( 1) 
00:20:28.030 5.594 - 5.618: 97.6497% ( 2) 00:20:28.030 5.618 - 5.641: 97.6576% ( 1) 00:20:28.030 5.689 - 5.713: 97.6656% ( 1) 00:20:28.030 5.736 - 5.760: 97.6735% ( 1) 00:20:28.030 5.784 - 5.807: 97.6814% ( 1) 00:20:28.030 5.807 - 5.831: 97.6894% ( 1) 00:20:28.030 5.902 - 5.926: 97.6973% ( 1) 00:20:28.030 6.021 - 6.044: 97.7053% ( 1) 00:20:28.030 6.044 - 6.068: 97.7132% ( 1) 00:20:28.030 6.068 - 6.116: 97.7370% ( 3) 00:20:28.030 6.210 - 6.258: 97.7450% ( 1) 00:20:28.030 6.305 - 6.353: 97.7529% ( 1) 00:20:28.030 6.353 - 6.400: 97.7608% ( 1) 00:20:28.030 6.400 - 6.447: 97.7688% ( 1) 00:20:28.030 6.447 - 6.495: 97.7926% ( 3) 00:20:28.030 6.684 - 6.732: 97.8005% ( 1) 00:20:28.030 6.732 - 6.779: 97.8085% ( 1) 00:20:28.030 6.969 - 7.016: 97.8164% ( 1) 00:20:28.030 7.064 - 7.111: 97.8244% ( 1) 00:20:28.030 7.111 - 7.159: 97.8482% ( 3) 00:20:28.030 7.159 - 7.206: 97.8561% ( 1) 00:20:28.030 7.206 - 7.253: 97.8641% ( 1) 00:20:28.030 7.301 - 7.348: 97.8879% ( 3) 00:20:28.030 7.348 - 7.396: 97.8958% ( 1) 00:20:28.030 7.538 - 7.585: 97.9038% ( 1) 00:20:28.030 7.585 - 7.633: 97.9196% ( 2) 00:20:28.030 7.633 - 7.680: 97.9276% ( 1) 00:20:28.030 7.822 - 7.870: 97.9355% ( 1) 00:20:28.030 7.917 - 7.964: 97.9514% ( 2) 00:20:28.030 8.012 - 8.059: 97.9752% ( 3) 00:20:28.030 8.059 - 8.107: 97.9832% ( 1) 00:20:28.030 8.107 - 8.154: 97.9911% ( 1) 00:20:28.031 8.154 - 8.201: 97.9990% ( 1) 00:20:28.031 8.296 - 8.344: 98.0149% ( 2) 00:20:28.031 8.344 - 8.391: 98.0308% ( 2) 00:20:28.031 8.486 - 8.533: 98.0387% ( 1) 00:20:28.031 8.581 - 8.628: 98.0546% ( 2) 00:20:28.031 8.676 - 8.723: 98.0626% ( 1) 00:20:28.031 8.770 - 8.818: 98.0705% ( 1) 00:20:28.031 8.818 - 8.865: 98.0864% ( 2) 00:20:28.031 8.865 - 8.913: 98.0943% ( 1) 00:20:28.031 8.913 - 8.960: 98.1102% ( 2) 00:20:28.031 8.960 - 9.007: 98.1340% ( 3) 00:20:28.031 9.007 - 9.055: 98.1420% ( 1) 00:20:28.031 9.055 - 9.102: 98.1499% ( 1) 00:20:28.031 9.102 - 9.150: 98.1896% ( 5) 00:20:28.031 9.150 - 9.197: 98.1976% ( 1) 00:20:28.031 9.197 - 9.244: 98.2134% ( 2) 00:20:28.031 9.244 - 9.292: 98.2214% ( 1) 00:20:28.031 9.292 - 9.339: 98.2373% ( 2) 00:20:28.031 9.339 - 9.387: 98.2611% ( 3) 00:20:28.031 9.529 - 9.576: 98.2690% ( 1) 00:20:28.031 9.719 - 9.766: 98.2849% ( 2) 00:20:28.031 9.766 - 9.813: 98.3008% ( 2) 00:20:28.031 9.813 - 9.861: 98.3087% ( 1) 00:20:28.031 9.861 - 9.908: 98.3167% ( 1) 00:20:28.031 9.908 - 9.956: 98.3405% ( 3) 00:20:28.031 9.956 - 10.003: 98.3484% ( 1) 00:20:28.031 10.003 - 10.050: 98.3643% ( 2) 00:20:28.031 10.050 - 10.098: 98.3802% ( 2) 00:20:28.031 10.098 - 10.145: 98.3881% ( 1) 00:20:28.031 10.193 - 10.240: 98.4040% ( 2) 00:20:28.031 10.240 - 10.287: 98.4119% ( 1) 00:20:28.031 10.287 - 10.335: 98.4278% ( 2) 00:20:28.031 10.335 - 10.382: 98.4516% ( 3) 00:20:28.031 10.430 - 10.477: 98.4596% ( 1) 00:20:28.031 10.524 - 10.572: 98.4675% ( 1) 00:20:28.031 10.809 - 10.856: 98.4834% ( 2) 00:20:28.031 10.856 - 10.904: 98.4993% ( 2) 00:20:28.031 10.951 - 10.999: 98.5072% ( 1) 00:20:28.031 11.046 - 11.093: 98.5152% ( 1) 00:20:28.031 11.236 - 11.283: 98.5469% ( 4) 00:20:28.031 11.283 - 11.330: 98.5549% ( 1) 00:20:28.031 11.330 - 11.378: 98.5628% ( 1) 00:20:28.031 11.425 - 11.473: 98.5707% ( 1) 00:20:28.031 11.473 - 11.520: 98.5866% ( 2) 00:20:28.031 11.615 - 11.662: 98.5946% ( 1) 00:20:28.031 11.757 - 11.804: 98.6104% ( 2) 00:20:28.031 11.804 - 11.852: 98.6263% ( 2) 00:20:28.031 11.899 - 11.947: 98.6422% ( 2) 00:20:28.031 11.947 - 11.994: 98.6502% ( 1) 00:20:28.031 12.089 - 12.136: 98.6740% ( 3) 00:20:28.031 12.136 - 12.231: 98.6899% ( 2) 00:20:28.031 
12.231 - 12.326: 98.7137% ( 3) 00:20:28.031 12.326 - 12.421: 98.7296% ( 2) 00:20:28.031 12.421 - 12.516: 98.7375% ( 1) 00:20:28.031 12.516 - 12.610: 98.7454% ( 1) 00:20:28.031 12.705 - 12.800: 98.7613% ( 2) 00:20:28.031 13.084 - 13.179: 98.7693% ( 1) 00:20:28.031 13.179 - 13.274: 98.7772% ( 1) 00:20:28.031 13.274 - 13.369: 98.7851% ( 1) 00:20:28.031 13.369 - 13.464: 98.8010% ( 2) 00:20:28.031 13.464 - 13.559: 98.8090% ( 1) 00:20:28.031 13.559 - 13.653: 98.8248% ( 2) 00:20:28.031 13.748 - 13.843: 98.8407% ( 2) 00:20:28.031 13.938 - 14.033: 98.8487% ( 1) 00:20:28.031 14.033 - 14.127: 98.8566% ( 1) 00:20:28.031 14.127 - 14.222: 98.8725% ( 2) 00:20:28.031 14.222 - 14.317: 98.8804% ( 1) 00:20:28.031 14.317 - 14.412: 98.8963% ( 2) 00:20:28.031 14.412 - 14.507: 98.9042% ( 1) 00:20:28.031 14.791 - 14.886: 98.9122% ( 1) 00:20:28.031 15.076 - 15.170: 98.9360% ( 3) 00:20:28.031 15.170 - 15.265: 98.9439% ( 1) 00:20:28.031 15.265 - 15.360: 98.9519% ( 1) 00:20:28.031 15.834 - 15.929: 98.9598% ( 1) 00:20:28.031 15.929 - 16.024: 98.9678% ( 1) 00:20:28.031 17.256 - 17.351: 98.9916% ( 3) 00:20:28.031 17.351 - 17.446: 99.0154% ( 3) 00:20:28.031 17.446 - 17.541: 99.0472% ( 4) 00:20:28.031 17.541 - 17.636: 99.0710% ( 3) 00:20:28.031 17.636 - 17.730: 99.1345% ( 8) 00:20:28.031 17.730 - 17.825: 99.2060% ( 9) 00:20:28.031 17.825 - 17.920: 99.2616% ( 7) 00:20:28.031 17.920 - 18.015: 99.3410% ( 10) 00:20:28.031 18.015 - 18.110: 99.4283% ( 11) 00:20:28.031 18.110 - 18.204: 99.4759% ( 6) 00:20:28.031 18.204 - 18.299: 99.4918% ( 2) 00:20:28.031 18.299 - 18.394: 99.5315% ( 5) 00:20:28.031 18.394 - 18.489: 99.5633% ( 4) 00:20:28.031 18.489 - 18.584: 99.5871% ( 3) 00:20:28.031 18.584 - 18.679: 99.6189% ( 4) 00:20:28.031 18.679 - 18.773: 99.6983% ( 10) 00:20:28.031 18.773 - 18.868: 99.7221% ( 3) 00:20:28.031 18.868 - 18.963: 99.7618% ( 5) 00:20:28.031 18.963 - 19.058: 99.7856% ( 3) 00:20:28.031 19.058 - 19.153: 99.7936% ( 1) 00:20:28.031 19.721 - 19.816: 99.8015% ( 1) 00:20:28.031 20.006 - 20.101: 99.8094% ( 1) 00:20:28.031 20.196 - 20.290: 99.8174% ( 1) 00:20:28.031 22.661 - 22.756: 99.8253% ( 1) 00:20:28.031 22.850 - 22.945: 99.8333% ( 1) 00:20:28.031 23.419 - 23.514: 99.8412% ( 1) 00:20:28.031 24.841 - 25.031: 99.8491% ( 1) 00:20:28.031 25.600 - 25.790: 99.8571% ( 1) 00:20:28.031 25.790 - 25.979: 99.8650% ( 1) 00:20:28.031 26.548 - 26.738: 99.8730% ( 1) 00:20:28.031 3616.616 - 3640.889: 99.8809% ( 1) 00:20:28.031 3980.705 - 4004.978: 99.9841% ( 13) 00:20:28.031 4004.978 - 4029.250: 100.0000% ( 2) 00:20:28.031 00:20:28.031 Complete histogram 00:20:28.031 ================== 00:20:28.031 Range in us Cumulative Count 00:20:28.031 2.062 - 2.074: 0.0873% ( 11) 00:20:28.031 2.074 - 2.086: 17.1828% ( 2153) 00:20:28.031 2.086 - 2.098: 33.5954% ( 2067) 00:20:28.031 2.098 - 2.110: 36.0410% ( 308) 00:20:28.031 2.110 - 2.121: 54.1607% ( 2282) 00:20:28.031 2.121 - 2.133: 61.1958% ( 886) 00:20:28.031 2.133 - 2.145: 64.3084% ( 392) 00:20:28.031 2.145 - 2.157: 74.3608% ( 1266) 00:20:28.031 2.157 - 2.169: 77.7354% ( 425) 00:20:28.031 2.169 - 2.181: 80.0937% ( 297) 00:20:28.031 2.181 - 2.193: 86.7080% ( 833) 00:20:28.031 2.193 - 2.204: 88.5104% ( 227) 00:20:28.031 2.204 - 2.216: 89.3918% ( 111) 00:20:28.031 2.216 - 2.228: 90.9242% ( 193) 00:20:28.031 2.228 - 2.240: 92.5202% ( 201) 00:20:28.031 2.240 - 2.252: 93.9654% ( 182) 00:20:28.031 2.252 - 2.264: 94.8229% ( 108) 00:20:28.031 2.264 - 2.276: 95.1088% ( 36) 00:20:28.031 2.276 - 2.287: 95.2596% ( 19) 00:20:28.031 2.287 - 2.299: 95.5058% ( 31) 00:20:28.031 2.299 - 2.311: 95.7361% ( 29) 
00:20:28.031 2.311 - 2.323: 95.9743% ( 30) 00:20:28.031 2.323 - 2.335: 96.0696% ( 12) 00:20:28.031 2.335 - 2.347: 96.1013% ( 4) 00:20:28.031 2.347 - 2.359: 96.1331% ( 4) 00:20:28.031 2.359 - 2.370: 96.1807% ( 6) 00:20:28.031 2.370 - 2.382: 96.3316% ( 19) 00:20:28.031 2.382 - 2.394: 96.5777% ( 31) 00:20:28.031 2.394 - 2.406: 96.8953% ( 40) 00:20:28.031 2.406 - 2.418: 97.0383% ( 18) 00:20:28.031 2.418 - 2.430: 97.3003% ( 33) 00:20:28.031 2.430 - 2.441: 97.5385% ( 30) 00:20:28.031 2.441 - 2.453: 97.6894% ( 19) 00:20:28.031 2.453 - 2.465: 97.8720% ( 23) 00:20:28.031 2.465 - 2.477: 98.0308% ( 20) [2024-12-09 10:31:12.562376] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:28.032 2.477 - 2.489: 98.1420% ( 14) 00:20:28.032 2.489 - 2.501: 98.2214% ( 10) 00:20:28.032 2.501 - 2.513: 98.3167% ( 12) 00:20:28.032 2.513 - 2.524: 98.3325% ( 2) 00:20:28.032 2.524 - 2.536: 98.3722% ( 5) 00:20:28.032 2.536 - 2.548: 98.3881% ( 2) 00:20:28.032 2.548 - 2.560: 98.4040% ( 2) 00:20:28.032 2.560 - 2.572: 98.4278% ( 3) 00:20:28.032 2.572 - 2.584: 98.4358% ( 1) 00:20:28.032 2.584 - 2.596: 98.4437% ( 1) 00:20:28.032 2.607 - 2.619: 98.4516% ( 1) 00:20:28.032 2.667 - 2.679: 98.4596% ( 1) 00:20:28.032 2.809 - 2.821: 98.4675% ( 1) 00:20:28.032 3.153 - 3.176: 98.4755% ( 1) 00:20:28.032 3.295 - 3.319: 98.4834% ( 1) 00:20:28.032 3.342 - 3.366: 98.4913% ( 1) 00:20:28.032 3.366 - 3.390: 98.4993% ( 1) 00:20:28.032 3.390 - 3.413: 98.5072% ( 1) 00:20:28.032 3.413 - 3.437: 98.5231% ( 2) 00:20:28.032 3.437 - 3.461: 98.5549% ( 4) 00:20:28.032 3.461 - 3.484: 98.5628% ( 1) 00:20:28.032 3.484 - 3.508: 98.5787% ( 2) 00:20:28.032 3.508 - 3.532: 98.5946% ( 2) 00:20:28.032 3.532 - 3.556: 98.6025% ( 1) 00:20:28.032 3.579 - 3.603: 98.6184% ( 2) 00:20:28.032 3.627 - 3.650: 98.6263% ( 1) 00:20:28.032 3.650 - 3.674: 98.6343% ( 1) 00:20:28.032 3.698 - 3.721: 98.6502% ( 2) 00:20:28.032 3.721 - 3.745: 98.6581% ( 1) 00:20:28.032 3.769 - 3.793: 98.6660% ( 1) 00:20:28.032 3.911 - 3.935: 98.6978% ( 4) 00:20:28.032 4.006 - 4.030: 98.7057% ( 1) 00:20:28.032 4.124 - 4.148: 98.7137% ( 1) 00:20:28.032 4.219 - 4.243: 98.7216% ( 1) 00:20:28.032 4.338 - 4.361: 98.7296% ( 1) 00:20:28.032 6.068 - 6.116: 98.7375% ( 1) 00:20:28.032 6.163 - 6.210: 98.7454% ( 1) 00:20:28.032 6.353 - 6.400: 98.7534% ( 1) 00:20:28.032 6.732 - 6.779: 98.7693% ( 2) 00:20:28.032 7.064 - 7.111: 98.7772% ( 1) 00:20:28.032 7.585 - 7.633: 98.7851% ( 1) 00:20:28.032 7.870 - 7.917: 98.7931% ( 1) 00:20:28.032 8.012 - 8.059: 98.8010% ( 1) 00:20:28.032 8.059 - 8.107: 98.8090% ( 1) 00:20:28.032 8.107 - 8.154: 98.8169% ( 1) 00:20:28.032 8.249 - 8.296: 98.8248% ( 1) 00:20:28.032 8.439 - 8.486: 98.8328% ( 1) 00:20:28.032 8.581 - 8.628: 98.8407% ( 1) 00:20:28.032 8.676 - 8.723: 98.8487% ( 1) 00:20:28.032 9.481 - 9.529: 98.8566% ( 1) 00:20:28.032 10.477 - 10.524: 98.8645% ( 1) 00:20:28.032 13.274 - 13.369: 98.8725% ( 1) 00:20:28.032 14.696 - 14.791: 98.8804% ( 1) 00:20:28.032 15.644 - 15.739: 98.8884% ( 1) 00:20:28.032 15.739 - 15.834: 98.9201% ( 4) 00:20:28.032 15.834 - 15.929: 98.9439% ( 3) 00:20:28.032 15.929 - 16.024: 98.9519% ( 1) 00:20:28.032 16.024 - 16.119: 98.9995% ( 6) 00:20:28.032 16.119 - 16.213: 99.0233% ( 3) 00:20:28.032 16.213 - 16.308: 99.0789% ( 7) 00:20:28.032 16.308 - 16.403: 99.1027% ( 3) 00:20:28.032 16.403 - 16.498: 99.1266% ( 3) 00:20:28.032 16.498 - 16.593: 99.1583% ( 4) 00:20:28.032 16.593 - 16.687: 99.1822% ( 3) 00:20:28.032 16.687 - 16.782: 99.2536% ( 9) 00:20:28.032 16.782 - 16.877: 99.3171% ( 8)
00:20:28.032 16.877 - 16.972: 99.3410% ( 3) 00:20:28.032 17.161 - 17.256: 99.3489% ( 1) 00:20:28.032 17.256 - 17.351: 99.3568% ( 1) 00:20:28.032 17.351 - 17.446: 99.3648% ( 1) 00:20:28.032 17.446 - 17.541: 99.3727% ( 1) 00:20:28.032 17.541 - 17.636: 99.3807% ( 1) 00:20:28.032 17.730 - 17.825: 99.3886% ( 1) 00:20:28.032 18.015 - 18.110: 99.3965% ( 1) 00:20:28.032 18.489 - 18.584: 99.4045% ( 1) 00:20:28.032 18.773 - 18.868: 99.4124% ( 1) 00:20:28.032 21.428 - 21.523: 99.4204% ( 1) 00:20:28.032 26.169 - 26.359: 99.4283% ( 1) 00:20:28.032 3980.705 - 4004.978: 99.9047% ( 60) 00:20:28.032 4004.978 - 4029.250: 99.9841% ( 10) 00:20:28.032 4029.250 - 4053.523: 99.9921% ( 1) 00:20:28.032 4053.523 - 4077.796: 100.0000% ( 1) 00:20:28.032 00:20:28.032 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:20:28.032 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:28.032 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:20:28.032 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:20:28.032 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:28.600 [ 00:20:28.600 { 00:20:28.600 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:28.600 "subtype": "Discovery", 00:20:28.600 "listen_addresses": [], 00:20:28.600 "allow_any_host": true, 00:20:28.600 "hosts": [] 00:20:28.600 }, 00:20:28.600 { 00:20:28.600 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:28.600 "subtype": "NVMe", 00:20:28.600 "listen_addresses": [ 00:20:28.600 { 00:20:28.600 "trtype": "VFIOUSER", 00:20:28.600 "adrfam": "IPv4", 00:20:28.600 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:28.600 "trsvcid": "0" 00:20:28.600 } 00:20:28.600 ], 00:20:28.600 "allow_any_host": true, 00:20:28.600 "hosts": [], 00:20:28.600 "serial_number": "SPDK1", 00:20:28.600 "model_number": "SPDK bdev Controller", 00:20:28.600 "max_namespaces": 32, 00:20:28.600 "min_cntlid": 1, 00:20:28.600 "max_cntlid": 65519, 00:20:28.600 "namespaces": [ 00:20:28.600 { 00:20:28.600 "nsid": 1, 00:20:28.600 "bdev_name": "Malloc1", 00:20:28.600 "name": "Malloc1", 00:20:28.600 "nguid": "093DE36957D74FDFBAF8AEA19712C601", 00:20:28.600 "uuid": "093de369-57d7-4fdf-baf8-aea19712c601" 00:20:28.600 } 00:20:28.600 ] 00:20:28.600 }, 00:20:28.600 { 00:20:28.600 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:28.600 "subtype": "NVMe", 00:20:28.600 "listen_addresses": [ 00:20:28.600 { 00:20:28.600 "trtype": "VFIOUSER", 00:20:28.600 "adrfam": "IPv4", 00:20:28.600 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:28.600 "trsvcid": "0" 00:20:28.600 } 00:20:28.600 ], 00:20:28.600 "allow_any_host": true, 00:20:28.600 "hosts": [], 00:20:28.600 "serial_number": "SPDK2", 00:20:28.600 "model_number": "SPDK bdev Controller", 00:20:28.600 "max_namespaces": 32, 00:20:28.600 "min_cntlid": 1, 00:20:28.600 "max_cntlid": 65519, 00:20:28.600 "namespaces": [ 00:20:28.600 { 00:20:28.600 "nsid": 1, 00:20:28.600 "bdev_name": "Malloc2", 00:20:28.600 "name": "Malloc2", 00:20:28.600 "nguid": "ADB46AFA5265445CB79CA3925F0669E0", 00:20:28.600 "uuid": "adb46afa-5265-445c-b79c-a3925f0669e0" 00:20:28.600 } 00:20:28.600 ] 00:20:28.600 } 
00:20:28.600 ] 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2076865 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:28.600 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:20:28.600 [2024-12-09 10:31:13.188239] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:28.858 Malloc3 00:20:28.858 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:20:29.115 [2024-12-09 10:31:13.677929] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:29.115 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:29.115 Asynchronous Event Request test 00:20:29.115 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:29.115 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:29.115 Registering asynchronous event callbacks... 00:20:29.115 Starting namespace attribute notice tests for all controllers... 00:20:29.115 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:29.115 aer_cb - Changed Namespace 00:20:29.115 Cleaning up... 
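The namespace-attribute notice above is produced by hot-adding a namespace while the aer tool waits for events. A minimal sketch of that trigger sequence, assuming the rpc.py calls exactly as they appear in the trace and running from the SPDK repository root (the polling loop stands in for the harness's waitforfile helper):

  # Arm the AER listener: -n 2 requests two asynchronous event requests,
  # -t touches the given file once the callbacks are registered.
  ./test/nvme/aer/aer \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!

  # Once the listener is armed, hot-add a namespace; the target then raises the
  # namespace-attribute-changed event seen above (log page 4, aen_event_type 0x02).
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  wait $aerpid

The updated subsystem listing that follows shows Malloc3 attached to nqn.2019-07.io.spdk:cnode1 as nsid 2.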
00:20:29.684 [ 00:20:29.684 { 00:20:29.684 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:29.684 "subtype": "Discovery", 00:20:29.684 "listen_addresses": [], 00:20:29.684 "allow_any_host": true, 00:20:29.684 "hosts": [] 00:20:29.684 }, 00:20:29.684 { 00:20:29.684 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:29.684 "subtype": "NVMe", 00:20:29.685 "listen_addresses": [ 00:20:29.685 { 00:20:29.685 "trtype": "VFIOUSER", 00:20:29.685 "adrfam": "IPv4", 00:20:29.685 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:29.685 "trsvcid": "0" 00:20:29.685 } 00:20:29.685 ], 00:20:29.685 "allow_any_host": true, 00:20:29.685 "hosts": [], 00:20:29.685 "serial_number": "SPDK1", 00:20:29.685 "model_number": "SPDK bdev Controller", 00:20:29.685 "max_namespaces": 32, 00:20:29.685 "min_cntlid": 1, 00:20:29.685 "max_cntlid": 65519, 00:20:29.685 "namespaces": [ 00:20:29.685 { 00:20:29.685 "nsid": 1, 00:20:29.685 "bdev_name": "Malloc1", 00:20:29.685 "name": "Malloc1", 00:20:29.685 "nguid": "093DE36957D74FDFBAF8AEA19712C601", 00:20:29.685 "uuid": "093de369-57d7-4fdf-baf8-aea19712c601" 00:20:29.685 }, 00:20:29.685 { 00:20:29.685 "nsid": 2, 00:20:29.685 "bdev_name": "Malloc3", 00:20:29.685 "name": "Malloc3", 00:20:29.685 "nguid": "8A4F62831D124C048F932C380D5AFE53", 00:20:29.685 "uuid": "8a4f6283-1d12-4c04-8f93-2c380d5afe53" 00:20:29.685 } 00:20:29.685 ] 00:20:29.685 }, 00:20:29.685 { 00:20:29.685 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:29.685 "subtype": "NVMe", 00:20:29.685 "listen_addresses": [ 00:20:29.685 { 00:20:29.685 "trtype": "VFIOUSER", 00:20:29.685 "adrfam": "IPv4", 00:20:29.685 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:29.685 "trsvcid": "0" 00:20:29.685 } 00:20:29.685 ], 00:20:29.685 "allow_any_host": true, 00:20:29.685 "hosts": [], 00:20:29.685 "serial_number": "SPDK2", 00:20:29.685 "model_number": "SPDK bdev Controller", 00:20:29.685 "max_namespaces": 32, 00:20:29.685 "min_cntlid": 1, 00:20:29.685 "max_cntlid": 65519, 00:20:29.685 "namespaces": [ 00:20:29.685 { 00:20:29.685 "nsid": 1, 00:20:29.685 "bdev_name": "Malloc2", 00:20:29.685 "name": "Malloc2", 00:20:29.685 "nguid": "ADB46AFA5265445CB79CA3925F0669E0", 00:20:29.685 "uuid": "adb46afa-5265-445c-b79c-a3925f0669e0" 00:20:29.685 } 00:20:29.685 ] 00:20:29.685 } 00:20:29.685 ] 00:20:29.685 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2076865 00:20:29.685 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:29.685 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:29.685 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:20:29.685 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:29.685 [2024-12-09 10:31:14.109401] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:20:29.685 [2024-12-09 10:31:14.109453] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077002 ] 00:20:29.685 [2024-12-09 10:31:14.164537] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:20:29.685 [2024-12-09 10:31:14.170042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:29.685 [2024-12-09 10:31:14.170076] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe198613000 00:20:29.685 [2024-12-09 10:31:14.171047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.172055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.173079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.174071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.175076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.176083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.177093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.178096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:29.685 [2024-12-09 10:31:14.179111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:29.685 [2024-12-09 10:31:14.179132] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe198608000 00:20:29.685 [2024-12-09 10:31:14.180247] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:29.685 [2024-12-09 10:31:14.194321] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:20:29.685 [2024-12-09 10:31:14.194358] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:20:29.685 [2024-12-09 10:31:14.199472] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:29.685 [2024-12-09 10:31:14.199528] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:29.685 [2024-12-09 10:31:14.199619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:20:29.685 
[2024-12-09 10:31:14.199642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:20:29.685 [2024-12-09 10:31:14.199653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:20:29.685 [2024-12-09 10:31:14.200480] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:20:29.685 [2024-12-09 10:31:14.200505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:20:29.685 [2024-12-09 10:31:14.200519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:20:29.685 [2024-12-09 10:31:14.201488] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:29.685 [2024-12-09 10:31:14.201509] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:20:29.685 [2024-12-09 10:31:14.201523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:29.685 [2024-12-09 10:31:14.202494] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:20:29.685 [2024-12-09 10:31:14.202515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:29.685 [2024-12-09 10:31:14.203503] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:20:29.685 [2024-12-09 10:31:14.203524] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:29.685 [2024-12-09 10:31:14.203537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:29.685 [2024-12-09 10:31:14.203549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:29.685 [2024-12-09 10:31:14.203660] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:20:29.685 [2024-12-09 10:31:14.203668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:29.685 [2024-12-09 10:31:14.203677] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:20:29.685 [2024-12-09 10:31:14.204523] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:20:29.685 [2024-12-09 10:31:14.205518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:20:29.685 [2024-12-09 10:31:14.206528] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:29.685 [2024-12-09 10:31:14.207517] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:29.685 [2024-12-09 10:31:14.207588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:29.685 [2024-12-09 10:31:14.208538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:20:29.685 [2024-12-09 10:31:14.208559] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:29.685 [2024-12-09 10:31:14.208568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.208591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:20:29.685 [2024-12-09 10:31:14.208608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.208635] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:29.685 [2024-12-09 10:31:14.208644] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:29.685 [2024-12-09 10:31:14.208651] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:29.685 [2024-12-09 10:31:14.208670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.216740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.216769] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:20:29.685 [2024-12-09 10:31:14.216780] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:20:29.685 [2024-12-09 10:31:14.216787] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:20:29.685 [2024-12-09 10:31:14.216796] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:29.685 [2024-12-09 10:31:14.216804] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:20:29.685 [2024-12-09 10:31:14.216812] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:20:29.685 [2024-12-09 10:31:14.216823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.216836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:29.685 [2024-12-09 
10:31:14.216852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.224744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.224768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.685 [2024-12-09 10:31:14.224781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.685 [2024-12-09 10:31:14.224793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.685 [2024-12-09 10:31:14.224804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.685 [2024-12-09 10:31:14.224813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.224830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.224846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.232731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.232751] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:20:29.685 [2024-12-09 10:31:14.232760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.232772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.232783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.232796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.240733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.240810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.240828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.240841] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:29.685 [2024-12-09 10:31:14.240849] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:20:29.685 [2024-12-09 10:31:14.240855] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:29.685 [2024-12-09 10:31:14.240865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.248735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.248758] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:20:29.685 [2024-12-09 10:31:14.248780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.248795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.248808] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:29.685 [2024-12-09 10:31:14.248816] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:29.685 [2024-12-09 10:31:14.248822] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:29.685 [2024-12-09 10:31:14.248831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.256748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.256780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.256798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.256812] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:29.685 [2024-12-09 10:31:14.256820] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:29.685 [2024-12-09 10:31:14.256826] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:29.685 [2024-12-09 10:31:14.256835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.264730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.264752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.264765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.264781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.264795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.264805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.264814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.264823] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:29.685 [2024-12-09 10:31:14.264830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:20:29.685 [2024-12-09 10:31:14.264839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:20:29.685 [2024-12-09 10:31:14.264864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:29.685 [2024-12-09 10:31:14.272736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:29.685 [2024-12-09 10:31:14.272761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:29.686 [2024-12-09 10:31:14.280734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.280758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:29.686 [2024-12-09 10:31:14.288733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.288758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:29.686 [2024-12-09 10:31:14.296730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.296763] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:29.686 [2024-12-09 10:31:14.296774] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:29.686 [2024-12-09 10:31:14.296780] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:29.686 [2024-12-09 10:31:14.296786] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:29.686 [2024-12-09 10:31:14.296792] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:29.686 [2024-12-09 10:31:14.296802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:29.686 [2024-12-09 10:31:14.296814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:29.686 
[2024-12-09 10:31:14.296822] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:29.686 [2024-12-09 10:31:14.296828] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:29.686 [2024-12-09 10:31:14.296837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:29.686 [2024-12-09 10:31:14.296848] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:29.686 [2024-12-09 10:31:14.296856] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:29.686 [2024-12-09 10:31:14.296861] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:29.686 [2024-12-09 10:31:14.296870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:29.686 [2024-12-09 10:31:14.296882] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:29.686 [2024-12-09 10:31:14.296890] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:29.686 [2024-12-09 10:31:14.296896] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:29.686 [2024-12-09 10:31:14.296905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:29.686 [2024-12-09 10:31:14.304735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.304764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.304781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.304798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:29.686 ===================================================== 00:20:29.686 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:29.686 ===================================================== 00:20:29.686 Controller Capabilities/Features 00:20:29.686 ================================ 00:20:29.686 Vendor ID: 4e58 00:20:29.686 Subsystem Vendor ID: 4e58 00:20:29.686 Serial Number: SPDK2 00:20:29.686 Model Number: SPDK bdev Controller 00:20:29.686 Firmware Version: 25.01 00:20:29.686 Recommended Arb Burst: 6 00:20:29.686 IEEE OUI Identifier: 8d 6b 50 00:20:29.686 Multi-path I/O 00:20:29.686 May have multiple subsystem ports: Yes 00:20:29.686 May have multiple controllers: Yes 00:20:29.686 Associated with SR-IOV VF: No 00:20:29.686 Max Data Transfer Size: 131072 00:20:29.686 Max Number of Namespaces: 32 00:20:29.686 Max Number of I/O Queues: 127 00:20:29.686 NVMe Specification Version (VS): 1.3 00:20:29.686 NVMe Specification Version (Identify): 1.3 00:20:29.686 Maximum Queue Entries: 256 00:20:29.686 Contiguous Queues Required: Yes 00:20:29.686 Arbitration Mechanisms Supported 00:20:29.686 Weighted Round Robin: Not Supported 00:20:29.686 Vendor Specific: Not 
Supported 00:20:29.686 Reset Timeout: 15000 ms 00:20:29.686 Doorbell Stride: 4 bytes 00:20:29.686 NVM Subsystem Reset: Not Supported 00:20:29.686 Command Sets Supported 00:20:29.686 NVM Command Set: Supported 00:20:29.686 Boot Partition: Not Supported 00:20:29.686 Memory Page Size Minimum: 4096 bytes 00:20:29.686 Memory Page Size Maximum: 4096 bytes 00:20:29.686 Persistent Memory Region: Not Supported 00:20:29.686 Optional Asynchronous Events Supported 00:20:29.686 Namespace Attribute Notices: Supported 00:20:29.686 Firmware Activation Notices: Not Supported 00:20:29.686 ANA Change Notices: Not Supported 00:20:29.686 PLE Aggregate Log Change Notices: Not Supported 00:20:29.686 LBA Status Info Alert Notices: Not Supported 00:20:29.686 EGE Aggregate Log Change Notices: Not Supported 00:20:29.686 Normal NVM Subsystem Shutdown event: Not Supported 00:20:29.686 Zone Descriptor Change Notices: Not Supported 00:20:29.686 Discovery Log Change Notices: Not Supported 00:20:29.686 Controller Attributes 00:20:29.686 128-bit Host Identifier: Supported 00:20:29.686 Non-Operational Permissive Mode: Not Supported 00:20:29.686 NVM Sets: Not Supported 00:20:29.686 Read Recovery Levels: Not Supported 00:20:29.686 Endurance Groups: Not Supported 00:20:29.686 Predictable Latency Mode: Not Supported 00:20:29.686 Traffic Based Keep ALive: Not Supported 00:20:29.686 Namespace Granularity: Not Supported 00:20:29.686 SQ Associations: Not Supported 00:20:29.686 UUID List: Not Supported 00:20:29.686 Multi-Domain Subsystem: Not Supported 00:20:29.686 Fixed Capacity Management: Not Supported 00:20:29.686 Variable Capacity Management: Not Supported 00:20:29.686 Delete Endurance Group: Not Supported 00:20:29.686 Delete NVM Set: Not Supported 00:20:29.686 Extended LBA Formats Supported: Not Supported 00:20:29.686 Flexible Data Placement Supported: Not Supported 00:20:29.686 00:20:29.686 Controller Memory Buffer Support 00:20:29.686 ================================ 00:20:29.686 Supported: No 00:20:29.686 00:20:29.686 Persistent Memory Region Support 00:20:29.686 ================================ 00:20:29.686 Supported: No 00:20:29.686 00:20:29.686 Admin Command Set Attributes 00:20:29.686 ============================ 00:20:29.686 Security Send/Receive: Not Supported 00:20:29.686 Format NVM: Not Supported 00:20:29.686 Firmware Activate/Download: Not Supported 00:20:29.686 Namespace Management: Not Supported 00:20:29.686 Device Self-Test: Not Supported 00:20:29.686 Directives: Not Supported 00:20:29.686 NVMe-MI: Not Supported 00:20:29.686 Virtualization Management: Not Supported 00:20:29.686 Doorbell Buffer Config: Not Supported 00:20:29.686 Get LBA Status Capability: Not Supported 00:20:29.686 Command & Feature Lockdown Capability: Not Supported 00:20:29.686 Abort Command Limit: 4 00:20:29.686 Async Event Request Limit: 4 00:20:29.686 Number of Firmware Slots: N/A 00:20:29.686 Firmware Slot 1 Read-Only: N/A 00:20:29.686 Firmware Activation Without Reset: N/A 00:20:29.686 Multiple Update Detection Support: N/A 00:20:29.686 Firmware Update Granularity: No Information Provided 00:20:29.686 Per-Namespace SMART Log: No 00:20:29.686 Asymmetric Namespace Access Log Page: Not Supported 00:20:29.686 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:20:29.686 Command Effects Log Page: Supported 00:20:29.686 Get Log Page Extended Data: Supported 00:20:29.686 Telemetry Log Pages: Not Supported 00:20:29.686 Persistent Event Log Pages: Not Supported 00:20:29.686 Supported Log Pages Log Page: May Support 00:20:29.686 Commands Supported & 
Effects Log Page: Not Supported 00:20:29.686 Feature Identifiers & Effects Log Page:May Support 00:20:29.686 NVMe-MI Commands & Effects Log Page: May Support 00:20:29.686 Data Area 4 for Telemetry Log: Not Supported 00:20:29.686 Error Log Page Entries Supported: 128 00:20:29.686 Keep Alive: Supported 00:20:29.686 Keep Alive Granularity: 10000 ms 00:20:29.686 00:20:29.686 NVM Command Set Attributes 00:20:29.686 ========================== 00:20:29.686 Submission Queue Entry Size 00:20:29.686 Max: 64 00:20:29.686 Min: 64 00:20:29.686 Completion Queue Entry Size 00:20:29.686 Max: 16 00:20:29.686 Min: 16 00:20:29.686 Number of Namespaces: 32 00:20:29.686 Compare Command: Supported 00:20:29.686 Write Uncorrectable Command: Not Supported 00:20:29.686 Dataset Management Command: Supported 00:20:29.686 Write Zeroes Command: Supported 00:20:29.686 Set Features Save Field: Not Supported 00:20:29.686 Reservations: Not Supported 00:20:29.686 Timestamp: Not Supported 00:20:29.686 Copy: Supported 00:20:29.686 Volatile Write Cache: Present 00:20:29.686 Atomic Write Unit (Normal): 1 00:20:29.686 Atomic Write Unit (PFail): 1 00:20:29.686 Atomic Compare & Write Unit: 1 00:20:29.686 Fused Compare & Write: Supported 00:20:29.686 Scatter-Gather List 00:20:29.686 SGL Command Set: Supported (Dword aligned) 00:20:29.686 SGL Keyed: Not Supported 00:20:29.686 SGL Bit Bucket Descriptor: Not Supported 00:20:29.686 SGL Metadata Pointer: Not Supported 00:20:29.686 Oversized SGL: Not Supported 00:20:29.686 SGL Metadata Address: Not Supported 00:20:29.686 SGL Offset: Not Supported 00:20:29.686 Transport SGL Data Block: Not Supported 00:20:29.686 Replay Protected Memory Block: Not Supported 00:20:29.686 00:20:29.686 Firmware Slot Information 00:20:29.686 ========================= 00:20:29.686 Active slot: 1 00:20:29.686 Slot 1 Firmware Revision: 25.01 00:20:29.686 00:20:29.686 00:20:29.686 Commands Supported and Effects 00:20:29.686 ============================== 00:20:29.686 Admin Commands 00:20:29.686 -------------- 00:20:29.686 Get Log Page (02h): Supported 00:20:29.686 Identify (06h): Supported 00:20:29.686 Abort (08h): Supported 00:20:29.686 Set Features (09h): Supported 00:20:29.686 Get Features (0Ah): Supported 00:20:29.686 Asynchronous Event Request (0Ch): Supported 00:20:29.686 Keep Alive (18h): Supported 00:20:29.686 I/O Commands 00:20:29.686 ------------ 00:20:29.686 Flush (00h): Supported LBA-Change 00:20:29.686 Write (01h): Supported LBA-Change 00:20:29.686 Read (02h): Supported 00:20:29.686 Compare (05h): Supported 00:20:29.686 Write Zeroes (08h): Supported LBA-Change 00:20:29.686 Dataset Management (09h): Supported LBA-Change 00:20:29.686 Copy (19h): Supported LBA-Change 00:20:29.686 00:20:29.686 Error Log 00:20:29.686 ========= 00:20:29.686 00:20:29.686 Arbitration 00:20:29.686 =========== 00:20:29.686 Arbitration Burst: 1 00:20:29.686 00:20:29.686 Power Management 00:20:29.686 ================ 00:20:29.686 Number of Power States: 1 00:20:29.686 Current Power State: Power State #0 00:20:29.686 Power State #0: 00:20:29.686 Max Power: 0.00 W 00:20:29.686 Non-Operational State: Operational 00:20:29.686 Entry Latency: Not Reported 00:20:29.686 Exit Latency: Not Reported 00:20:29.686 Relative Read Throughput: 0 00:20:29.686 Relative Read Latency: 0 00:20:29.686 Relative Write Throughput: 0 00:20:29.686 Relative Write Latency: 0 00:20:29.686 Idle Power: Not Reported 00:20:29.686 Active Power: Not Reported 00:20:29.686 Non-Operational Permissive Mode: Not Supported 00:20:29.686 00:20:29.686 Health Information 
00:20:29.686 ================== 00:20:29.686 Critical Warnings: 00:20:29.686 Available Spare Space: OK 00:20:29.686 Temperature: OK 00:20:29.686 Device Reliability: OK 00:20:29.686 Read Only: No 00:20:29.686 Volatile Memory Backup: OK 00:20:29.686 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:29.686 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:29.686 Available Spare: 0% 00:20:29.686 Available Spare Threshold: 0% [2024-12-09 10:31:14.304916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:29.686 [2024-12-09 10:31:14.312730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.312780] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:20:29.686 [2024-12-09 10:31:14.312798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.312809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.312819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.312828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.686 [2024-12-09 10:31:14.312913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:29.686 [2024-12-09 10:31:14.312935] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:20:29.686 [2024-12-09 10:31:14.313915] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:29.686 [2024-12-09 10:31:14.313987] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:20:29.686 [2024-12-09 10:31:14.314023] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:20:29.686 [2024-12-09 10:31:14.314925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:20:29.686 [2024-12-09 10:31:14.314949] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:20:29.686 [2024-12-09 10:31:14.315001] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:20:29.686 [2024-12-09 10:31:14.316201] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:29.945 Life Percentage Used: 0% 00:20:29.945 Data Units Read: 0 00:20:29.945 Data Units Written: 0 00:20:29.945 Host Read Commands: 0 00:20:29.945 Host Write Commands: 0 00:20:29.945 Controller Busy Time: 0 minutes 00:20:29.945 Power Cycles: 0 00:20:29.945 Power On Hours: 0 hours 00:20:29.945 Unsafe Shutdowns: 0 00:20:29.945 Unrecoverable Media Errors: 0 00:20:29.945 Lifetime Error Log Entries: 0 00:20:29.945 Warning Temperature 
Time: 0 minutes 00:20:29.945 Critical Temperature Time: 0 minutes 00:20:29.945 00:20:29.945 Number of Queues 00:20:29.945 ================ 00:20:29.945 Number of I/O Submission Queues: 127 00:20:29.945 Number of I/O Completion Queues: 127 00:20:29.945 00:20:29.945 Active Namespaces 00:20:29.945 ================= 00:20:29.945 Namespace ID:1 00:20:29.945 Error Recovery Timeout: Unlimited 00:20:29.945 Command Set Identifier: NVM (00h) 00:20:29.945 Deallocate: Supported 00:20:29.945 Deallocated/Unwritten Error: Not Supported 00:20:29.945 Deallocated Read Value: Unknown 00:20:29.945 Deallocate in Write Zeroes: Not Supported 00:20:29.945 Deallocated Guard Field: 0xFFFF 00:20:29.945 Flush: Supported 00:20:29.945 Reservation: Supported 00:20:29.945 Namespace Sharing Capabilities: Multiple Controllers 00:20:29.945 Size (in LBAs): 131072 (0GiB) 00:20:29.945 Capacity (in LBAs): 131072 (0GiB) 00:20:29.945 Utilization (in LBAs): 131072 (0GiB) 00:20:29.945 NGUID: ADB46AFA5265445CB79CA3925F0669E0 00:20:29.945 UUID: adb46afa-5265-445c-b79c-a3925f0669e0 00:20:29.945 Thin Provisioning: Not Supported 00:20:29.945 Per-NS Atomic Units: Yes 00:20:29.945 Atomic Boundary Size (Normal): 0 00:20:29.945 Atomic Boundary Size (PFail): 0 00:20:29.945 Atomic Boundary Offset: 0 00:20:29.945 Maximum Single Source Range Length: 65535 00:20:29.945 Maximum Copy Length: 65535 00:20:29.945 Maximum Source Range Count: 1 00:20:29.945 NGUID/EUI64 Never Reused: No 00:20:29.945 Namespace Write Protected: No 00:20:29.945 Number of LBA Formats: 1 00:20:29.945 Current LBA Format: LBA Format #00 00:20:29.945 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:29.945 00:20:29.945 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:29.945 [2024-12-09 10:31:14.578612] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:35.315 Initializing NVMe Controllers 00:20:35.315 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:35.315 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:35.315 Initialization complete. Launching workers. 
00:20:35.315 ======================================================== 00:20:35.315 Latency(us) 00:20:35.315 Device Information : IOPS MiB/s Average min max 00:20:35.315 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31733.95 123.96 4032.76 1212.78 8987.37 00:20:35.315 ======================================================== 00:20:35.315 Total : 31733.95 123.96 4032.76 1212.78 8987.37 00:20:35.315 00:20:35.315 [2024-12-09 10:31:19.684089] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:35.315 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:35.315 [2024-12-09 10:31:19.939795] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:40.597 Initializing NVMe Controllers 00:20:40.597 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:40.597 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:40.597 Initialization complete. Launching workers. 00:20:40.597 ======================================================== 00:20:40.597 Latency(us) 00:20:40.597 Device Information : IOPS MiB/s Average min max 00:20:40.597 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30157.21 117.80 4243.78 1253.64 7788.84 00:20:40.597 ======================================================== 00:20:40.597 Total : 30157.21 117.80 4243.78 1253.64 7788.84 00:20:40.597 00:20:40.597 [2024-12-09 10:31:24.965098] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:40.597 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:40.597 [2024-12-09 10:31:25.195619] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:45.876 [2024-12-09 10:31:30.339861] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:45.876 Initializing NVMe Controllers 00:20:45.876 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:45.876 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:45.876 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:45.876 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:45.876 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:45.876 Initialization complete. Launching workers. 
00:20:45.876 Starting thread on core 2 00:20:45.876 Starting thread on core 3 00:20:45.876 Starting thread on core 1 00:20:45.876 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:46.134 [2024-12-09 10:31:30.658798] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:49.417 [2024-12-09 10:31:33.724098] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:49.417 Initializing NVMe Controllers 00:20:49.417 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:49.417 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:49.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:49.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:49.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:49.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:49.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:49.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:49.417 Initialization complete. Launching workers. 00:20:49.417 Starting thread on core 1 with urgent priority queue 00:20:49.417 Starting thread on core 2 with urgent priority queue 00:20:49.417 Starting thread on core 3 with urgent priority queue 00:20:49.417 Starting thread on core 0 with urgent priority queue 00:20:49.417 SPDK bdev Controller (SPDK2 ) core 0: 5300.00 IO/s 18.87 secs/100000 ios 00:20:49.417 SPDK bdev Controller (SPDK2 ) core 1: 6426.00 IO/s 15.56 secs/100000 ios 00:20:49.417 SPDK bdev Controller (SPDK2 ) core 2: 6550.67 IO/s 15.27 secs/100000 ios 00:20:49.417 SPDK bdev Controller (SPDK2 ) core 3: 4931.67 IO/s 20.28 secs/100000 ios 00:20:49.417 ======================================================== 00:20:49.417 00:20:49.417 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:49.417 [2024-12-09 10:31:34.045265] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:49.417 Initializing NVMe Controllers 00:20:49.417 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:49.417 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:49.417 Namespace ID: 1 size: 0GB 00:20:49.417 Initialization complete. 00:20:49.417 INFO: using host memory buffer for IO 00:20:49.417 Hello world! 
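(Note on the example invocations in this section: every tool so far — spdk_nvme_perf, reconnect, arbitration, hello_world — attaches through the same -r transport ID string. As a minimal stand-alone sketch, assuming the build tree and the vfio-user socket from this run are still in place, and with SPDK_DIR and TRID as illustrative names rather than variables the test script actually defines, the hello_world step above could be reproduced by hand with:

    # illustrative sketch only; paths and NQN are the values this job happened to use
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    "$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TRID"

Only the binary and its workload flags change between the runs logged here; the transport ID format stays the same.)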
00:20:49.417 [2024-12-09 10:31:34.055329] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:49.677 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:49.936 [2024-12-09 10:31:34.425487] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:50.875 Initializing NVMe Controllers 00:20:50.875 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:50.875 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:50.875 Initialization complete. Launching workers. 00:20:50.875 submit (in ns) avg, min, max = 7746.9, 3500.0, 4003624.4 00:20:50.875 complete (in ns) avg, min, max = 27733.9, 2066.7, 4016554.4 00:20:50.875 00:20:50.875 Submit histogram 00:20:50.875 ================ 00:20:50.875 Range in us Cumulative Count 00:20:50.875 3.484 - 3.508: 0.0080% ( 1) 00:20:50.875 3.508 - 3.532: 0.3667% ( 45) 00:20:50.875 3.532 - 3.556: 1.5702% ( 151) 00:20:50.875 3.556 - 3.579: 4.4875% ( 366) 00:20:50.875 3.579 - 3.603: 9.3416% ( 609) 00:20:50.875 3.603 - 3.627: 18.6275% ( 1165) 00:20:50.875 3.627 - 3.650: 28.4473% ( 1232) 00:20:50.875 3.650 - 3.674: 38.2433% ( 1229) 00:20:50.875 3.674 - 3.698: 45.3690% ( 894) 00:20:50.875 3.698 - 3.721: 53.2759% ( 992) 00:20:50.875 3.721 - 3.745: 59.2779% ( 753) 00:20:50.875 3.745 - 3.769: 64.5146% ( 657) 00:20:50.875 3.769 - 3.793: 68.2927% ( 474) 00:20:50.875 3.793 - 3.816: 71.4969% ( 402) 00:20:50.875 3.816 - 3.840: 74.7170% ( 404) 00:20:50.875 3.840 - 3.864: 78.0249% ( 415) 00:20:50.875 3.864 - 3.887: 81.2211% ( 401) 00:20:50.875 3.887 - 3.911: 84.5130% ( 413) 00:20:50.875 3.911 - 3.935: 87.1274% ( 328) 00:20:50.875 3.935 - 3.959: 89.0164% ( 237) 00:20:50.875 3.959 - 3.982: 90.4751% ( 183) 00:20:50.875 3.982 - 4.006: 92.1090% ( 205) 00:20:50.875 4.006 - 4.030: 93.1771% ( 134) 00:20:50.875 4.030 - 4.053: 94.1416% ( 121) 00:20:50.875 4.053 - 4.077: 94.8908% ( 94) 00:20:50.875 4.077 - 4.101: 95.5444% ( 82) 00:20:50.875 4.101 - 4.124: 95.9828% ( 55) 00:20:50.875 4.124 - 4.148: 96.2139% ( 29) 00:20:50.875 4.148 - 4.172: 96.3893% ( 22) 00:20:50.875 4.172 - 4.196: 96.5088% ( 15) 00:20:50.875 4.196 - 4.219: 96.6284% ( 15) 00:20:50.875 4.219 - 4.243: 96.7878% ( 20) 00:20:50.875 4.243 - 4.267: 96.8197% ( 4) 00:20:50.875 4.267 - 4.290: 96.9632% ( 18) 00:20:50.875 4.290 - 4.314: 97.0190% ( 7) 00:20:50.875 4.314 - 4.338: 97.0907% ( 9) 00:20:50.875 4.338 - 4.361: 97.1864% ( 12) 00:20:50.875 4.361 - 4.385: 97.2182% ( 4) 00:20:50.875 4.385 - 4.409: 97.2421% ( 3) 00:20:50.875 4.409 - 4.433: 97.2581% ( 2) 00:20:50.875 4.433 - 4.456: 97.2661% ( 1) 00:20:50.875 4.456 - 4.480: 97.2740% ( 1) 00:20:50.875 4.504 - 4.527: 97.2900% ( 2) 00:20:50.875 4.551 - 4.575: 97.2979% ( 1) 00:20:50.875 4.599 - 4.622: 97.3139% ( 2) 00:20:50.875 4.622 - 4.646: 97.3378% ( 3) 00:20:50.875 4.670 - 4.693: 97.3537% ( 2) 00:20:50.875 4.693 - 4.717: 97.3856% ( 4) 00:20:50.875 4.717 - 4.741: 97.4414% ( 7) 00:20:50.875 4.741 - 4.764: 97.4653% ( 3) 00:20:50.875 4.764 - 4.788: 97.4972% ( 4) 00:20:50.875 4.788 - 4.812: 97.5291% ( 4) 00:20:50.875 4.812 - 4.836: 97.5689% ( 5) 00:20:50.875 4.836 - 4.859: 97.5929% ( 3) 00:20:50.875 4.859 - 4.883: 97.7044% ( 14) 00:20:50.875 4.883 - 4.907: 97.7762% ( 9) 00:20:50.875 4.907 - 4.930: 97.8160% ( 5) 00:20:50.875 4.930 - 
4.954: 97.8479% ( 4) 00:20:50.875 4.954 - 4.978: 97.9037% ( 7) 00:20:50.875 4.978 - 5.001: 97.9436% ( 5) 00:20:50.875 5.001 - 5.025: 97.9994% ( 7) 00:20:50.875 5.025 - 5.049: 98.0392% ( 5) 00:20:50.875 5.049 - 5.073: 98.0711% ( 4) 00:20:50.875 5.073 - 5.096: 98.1269% ( 7) 00:20:50.875 5.096 - 5.120: 98.1508% ( 3) 00:20:50.875 5.144 - 5.167: 98.1827% ( 4) 00:20:50.875 5.167 - 5.191: 98.2066% ( 3) 00:20:50.875 5.191 - 5.215: 98.2385% ( 4) 00:20:50.875 5.215 - 5.239: 98.2465% ( 1) 00:20:50.875 5.262 - 5.286: 98.2544% ( 1) 00:20:50.875 5.286 - 5.310: 98.2783% ( 3) 00:20:50.875 5.310 - 5.333: 98.2863% ( 1) 00:20:50.876 5.333 - 5.357: 98.3022% ( 2) 00:20:50.876 5.357 - 5.381: 98.3182% ( 2) 00:20:50.876 5.404 - 5.428: 98.3262% ( 1) 00:20:50.876 5.452 - 5.476: 98.3341% ( 1) 00:20:50.876 5.476 - 5.499: 98.3421% ( 1) 00:20:50.876 5.499 - 5.523: 98.3501% ( 1) 00:20:50.876 5.570 - 5.594: 98.3580% ( 1) 00:20:50.876 5.594 - 5.618: 98.3660% ( 1) 00:20:50.876 5.689 - 5.713: 98.3740% ( 1) 00:20:50.876 5.879 - 5.902: 98.3820% ( 1) 00:20:50.876 5.926 - 5.950: 98.3899% ( 1) 00:20:50.876 5.950 - 5.973: 98.3979% ( 1) 00:20:50.876 5.997 - 6.021: 98.4059% ( 1) 00:20:50.876 6.068 - 6.116: 98.4218% ( 2) 00:20:50.876 6.163 - 6.210: 98.4377% ( 2) 00:20:50.876 6.210 - 6.258: 98.4537% ( 2) 00:20:50.876 6.400 - 6.447: 98.4617% ( 1) 00:20:50.876 6.447 - 6.495: 98.4696% ( 1) 00:20:50.876 6.542 - 6.590: 98.4776% ( 1) 00:20:50.876 6.779 - 6.827: 98.4856% ( 1) 00:20:50.876 6.921 - 6.969: 98.4935% ( 1) 00:20:50.876 7.111 - 7.159: 98.5015% ( 1) 00:20:50.876 7.159 - 7.206: 98.5095% ( 1) 00:20:50.876 7.206 - 7.253: 98.5175% ( 1) 00:20:50.876 7.253 - 7.301: 98.5254% ( 1) 00:20:50.876 7.870 - 7.917: 98.5334% ( 1) 00:20:50.876 8.012 - 8.059: 98.5414% ( 1) 00:20:50.876 8.059 - 8.107: 98.5493% ( 1) 00:20:50.876 8.107 - 8.154: 98.5733% ( 3) 00:20:50.876 8.201 - 8.249: 98.5812% ( 1) 00:20:50.876 8.249 - 8.296: 98.5892% ( 1) 00:20:50.876 8.296 - 8.344: 98.5972% ( 1) 00:20:50.876 8.676 - 8.723: 98.6131% ( 2) 00:20:50.876 8.818 - 8.865: 98.6211% ( 1) 00:20:50.876 8.865 - 8.913: 98.6290% ( 1) 00:20:50.876 9.007 - 9.055: 98.6370% ( 1) 00:20:50.876 9.055 - 9.102: 98.6609% ( 3) 00:20:50.876 9.102 - 9.150: 98.6769% ( 2) 00:20:50.876 9.150 - 9.197: 98.6928% ( 2) 00:20:50.876 9.292 - 9.339: 98.7167% ( 3) 00:20:50.876 9.339 - 9.387: 98.7247% ( 1) 00:20:50.876 9.387 - 9.434: 98.7327% ( 1) 00:20:50.876 9.434 - 9.481: 98.7406% ( 1) 00:20:50.876 9.481 - 9.529: 98.7486% ( 1) 00:20:50.876 9.529 - 9.576: 98.7566% ( 1) 00:20:50.876 9.576 - 9.624: 98.7645% ( 1) 00:20:50.876 9.624 - 9.671: 98.7885% ( 3) 00:20:50.876 9.719 - 9.766: 98.7964% ( 1) 00:20:50.876 9.813 - 9.861: 98.8044% ( 1) 00:20:50.876 10.098 - 10.145: 98.8124% ( 1) 00:20:50.876 10.240 - 10.287: 98.8203% ( 1) 00:20:50.876 10.335 - 10.382: 98.8283% ( 1) 00:20:50.876 10.382 - 10.430: 98.8363% ( 1) 00:20:50.876 10.524 - 10.572: 98.8443% ( 1) 00:20:50.876 10.667 - 10.714: 98.8522% ( 1) 00:20:50.876 10.761 - 10.809: 98.8602% ( 1) 00:20:50.876 10.999 - 11.046: 98.8682% ( 1) 00:20:50.876 11.093 - 11.141: 98.8761% ( 1) 00:20:50.876 11.236 - 11.283: 98.8841% ( 1) 00:20:50.876 11.473 - 11.520: 98.8921% ( 1) 00:20:50.876 11.804 - 11.852: 98.9000% ( 1) 00:20:50.876 11.852 - 11.899: 98.9080% ( 1) 00:20:50.876 11.994 - 12.041: 98.9160% ( 1) 00:20:50.876 12.136 - 12.231: 98.9240% ( 1) 00:20:50.876 12.421 - 12.516: 98.9319% ( 1) 00:20:50.876 12.610 - 12.705: 98.9479% ( 2) 00:20:50.876 12.705 - 12.800: 98.9558% ( 1) 00:20:50.876 12.800 - 12.895: 98.9638% ( 1) 00:20:50.876 12.895 - 12.990: 98.9718% ( 1) 
00:20:50.876 13.274 - 13.369: 98.9798% ( 1) 00:20:50.876 13.369 - 13.464: 98.9877% ( 1) 00:20:50.876 13.464 - 13.559: 98.9957% ( 1) 00:20:50.876 13.748 - 13.843: 99.0037% ( 1) 00:20:50.876 13.843 - 13.938: 99.0116% ( 1) 00:20:50.876 14.033 - 14.127: 99.0196% ( 1) 00:20:50.876 14.222 - 14.317: 99.0435% ( 3) 00:20:50.876 14.886 - 14.981: 99.0515% ( 1) 00:20:50.876 15.644 - 15.739: 99.0595% ( 1) 00:20:50.876 16.024 - 16.119: 99.0674% ( 1) 00:20:50.876 16.972 - 17.067: 99.0754% ( 1) 00:20:50.876 17.067 - 17.161: 99.0913% ( 2) 00:20:50.876 17.161 - 17.256: 99.0993% ( 1) 00:20:50.876 17.256 - 17.351: 99.1153% ( 2) 00:20:50.876 17.351 - 17.446: 99.1232% ( 1) 00:20:50.876 17.446 - 17.541: 99.1392% ( 2) 00:20:50.876 17.541 - 17.636: 99.1631% ( 3) 00:20:50.876 17.636 - 17.730: 99.2109% ( 6) 00:20:50.876 17.730 - 17.825: 99.2508% ( 5) 00:20:50.876 17.825 - 17.920: 99.3225% ( 9) 00:20:50.876 17.920 - 18.015: 99.4102% ( 11) 00:20:50.876 18.015 - 18.110: 99.4421% ( 4) 00:20:50.876 18.110 - 18.204: 99.5218% ( 10) 00:20:50.876 18.204 - 18.299: 99.5855% ( 8) 00:20:50.876 18.299 - 18.394: 99.6174% ( 4) 00:20:50.876 18.394 - 18.489: 99.6732% ( 7) 00:20:50.876 18.489 - 18.584: 99.7210% ( 6) 00:20:50.876 18.584 - 18.679: 99.7449% ( 3) 00:20:50.876 18.679 - 18.773: 99.7689% ( 3) 00:20:50.876 18.773 - 18.868: 99.7848% ( 2) 00:20:50.876 18.868 - 18.963: 99.8406% ( 7) 00:20:50.876 19.058 - 19.153: 99.8565% ( 2) 00:20:50.876 19.247 - 19.342: 99.8645% ( 1) 00:20:50.876 23.609 - 23.704: 99.8725% ( 1) 00:20:50.876 24.462 - 24.652: 99.8804% ( 1) 00:20:50.876 24.841 - 25.031: 99.8884% ( 1) 00:20:50.876 25.031 - 25.221: 99.8964% ( 1) 00:20:50.876 26.927 - 27.117: 99.9044% ( 1) 00:20:50.876 3980.705 - 4004.978: 100.0000% ( 12) 00:20:50.876 00:20:50.876 Complete histogram 00:20:50.876 ================== 00:20:50.876 Range in us Cumulative Count 00:20:50.876 2.062 - 2.074: 1.1398% ( 143) 00:20:50.876 2.074 - 2.086: 30.6711% ( 3705) 00:20:50.876 2.086 - 2.098: 37.7889% ( 893) 00:20:50.876 2.098 - 2.110: 43.3684% ( 700) 00:20:50.876 2.110 - 2.121: 60.0670% ( 2095) 00:20:50.876 2.121 - 2.133: 63.2074% ( 394) 00:20:50.876 2.133 - 2.145: 67.8702% ( 585) 00:20:50.876 2.145 - 2.157: 78.0328% ( 1275) 00:20:50.876 2.157 - 2.169: 79.2842% ( 157) 00:20:50.876 2.169 - 2.181: 83.3572% ( 511) 00:20:50.876 2.181 - 2.193: 88.6657% ( 666) 00:20:50.876 2.193 - 2.204: 89.5744% ( 114) 00:20:50.876 2.204 - 2.216: 90.6504% ( 135) 00:20:50.876 2.216 - 2.228: 92.0532% ( 176) 00:20:50.876 2.228 - 2.240: 93.8546% ( 226) 00:20:50.876 2.240 - 2.252: 94.6357% ( 98) 00:20:50.876 2.252 - 2.264: 94.9386% ( 38) 00:20:50.876 2.264 - 2.276: 95.0901% ( 19) 00:20:50.876 2.276 - 2.287: 95.1857% ( 12) 00:20:50.876 2.287 - 2.299: 95.4408% ( 32) 00:20:50.876 2.299 - 2.311: 95.6161% ( 22) 00:20:50.876 2.311 - 2.323: 95.6958% ( 10) 00:20:50.876 2.323 - 2.335: 95.7118% ( 2) 00:20:50.876 2.335 - 2.347: 95.7437% ( 4) 00:20:50.876 2.347 - 2.359: 95.7835% ( 5) 00:20:50.876 2.359 - 2.370: 95.8792% ( 12) 00:20:50.876 2.370 - 2.382: 96.0226% ( 18) 00:20:50.876 2.382 - 2.394: 96.1980% ( 22) 00:20:50.876 2.394 - 2.406: 96.4531% ( 32) 00:20:50.876 2.406 - 2.418: 96.7001% ( 31) 00:20:50.876 2.418 - 2.430: 97.0349% ( 42) 00:20:50.876 2.430 - 2.441: 97.3378% ( 38) 00:20:50.876 2.441 - 2.453: 97.5132% ( 22) 00:20:50.876 2.453 - 2.465: 97.7044% ( 24) 00:20:50.876 2.465 - 2.477: 97.8320% ( 16) 00:20:50.876 2.477 - 2.489: 97.9515% ( 15) 00:20:50.876 2.489 - 2.501: 98.0552% ( 13) 00:20:50.876 2.501 - 2.513: 98.0950% ( 5) 00:20:50.876 2.513 - 2.524: 98.1508% ( 7) 00:20:50.876 2.524 - 
2.536: 98.1907% ( 5) 00:20:50.876 2.536 - 2.548: 98.1986% ( 1) 00:20:50.876 2.548 - 2.560: 98.2225% ( 3) 00:20:50.876 2.560 - 2.572: 98.2544% ( 4) 00:20:50.876 2.572 - 2.584: 98.2704% ( 2) 00:20:50.876 2.596 - 2.607: 98.2783% ( 1) 00:20:50.876 2.607 - 2.619: 98.2863% ( 1) 00:20:50.876 2.619 - 2.631: 98.2943% ( 1) 00:20:50.876 2.643 - 2.655: 98.3022% ( 1) 00:20:50.876 2.690 - 2.702: 98.3182% ( 2) 00:20:50.876 2.702 - 2.714: 98.3421% ( 3) 00:20:50.876 2.714 - 2.726: 98.3501% ( 1) 00:20:50.876 2.726 - 2.738: 98.3580% ( 1) 00:20:50.876 2.773 - 2.785: 98.3660% ( 1) 00:20:50.876 2.785 - 2.797: 98.3740% ( 1) 00:20:50.876 2.809 - 2.821: 98.3820% ( 1) 00:20:50.876 2.880 - 2.892: 98.3899% ( 1) 00:20:50.876 2.927 - 2.939: 98.3979% ( 1) 00:20:50.876 2.939 - 2.951: 98.4138% ( 2) 00:20:50.876 3.034 - 3.058: 98.4218% ( 1) 00:20:50.876 3.058 - 3.081: 98.4377% ( 2) 00:20:50.876 3.366 - 3.390: 98.4457% ( 1) 00:20:50.876 3.413 - 3.437: 98.4537% ( 1) 00:20:50.876 3.461 - 3.484: 98.4696% ( 2) 00:20:50.876 3.484 - 3.508: 98.5175% ( 6) 00:20:50.876 3.508 - 3.532: 98.5414% ( 3) 00:20:50.876 3.532 - 3.556: 98.5573% ( 2) 00:20:50.876 3.556 - 3.579: 98.5812% ( 3) 00:20:50.876 3.721 - 3.745: 98.5892% ( 1) 00:20:50.876 3.745 - 3.769: 98.5972% ( 1) 00:20:50.876 3.769 - 3.793: 98.6131% ( 2) 00:20:50.876 3.864 - 3.887: 98.6211% ( 1) 00:20:50.876 3.911 - 3.935: 98.6370% ( 2) [2024-12-09 10:31:35.520540] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:51.135 3.959 - 3.982: 98.6450% ( 1) 00:20:51.135 4.006 - 4.030: 98.6530% ( 1) 00:20:51.135 4.077 - 4.101: 98.6609% ( 1) 00:20:51.135 4.124 - 4.148: 98.6689% ( 1) 00:20:51.135 4.290 - 4.314: 98.6769% ( 1) 00:20:51.135 4.361 - 4.385: 98.6848% ( 1) 00:20:51.135 4.575 - 4.599: 98.6928% ( 1) 00:20:51.135 4.670 - 4.693: 98.7008% ( 1) 00:20:51.135 5.689 - 5.713: 98.7088% ( 1) 00:20:51.135 6.068 - 6.116: 98.7167% ( 1) 00:20:51.135 6.163 - 6.210: 98.7247% ( 1) 00:20:51.135 6.258 - 6.305: 98.7327% ( 1) 00:20:51.135 6.353 - 6.400: 98.7406% ( 1) 00:20:51.135 6.400 - 6.447: 98.7486% ( 1) 00:20:51.135 6.495 - 6.542: 98.7566% ( 1) 00:20:51.135 6.542 - 6.590: 98.7645% ( 1) 00:20:51.135 6.590 - 6.637: 98.7725% ( 1) 00:20:51.135 7.159 - 7.206: 98.7885% ( 2) 00:20:51.135 7.206 - 7.253: 98.7964% ( 1) 00:20:51.135 8.818 - 8.865: 98.8044% ( 1) 00:20:51.135 8.960 - 9.007: 98.8124% ( 1) 00:20:51.135 10.477 - 10.524: 98.8203% ( 1) 00:20:51.135 15.455 - 15.550: 98.8283% ( 1) 00:20:51.135 15.644 - 15.739: 98.8602% ( 4) 00:20:51.135 15.739 - 15.834: 98.8841% ( 3) 00:20:51.135 15.834 - 15.929: 98.9240% ( 5) 00:20:51.135 15.929 - 16.024: 98.9718% ( 6) 00:20:51.135 16.024 - 16.119: 98.9877% ( 2) 00:20:51.135 16.119 - 16.213: 99.0196% ( 4) 00:20:51.135 16.213 - 16.308: 99.0754% ( 7) 00:20:51.135 16.308 - 16.403: 99.0913% ( 2) 00:20:51.135 16.403 - 16.498: 99.1153% ( 3) 00:20:51.135 16.498 - 16.593: 99.1232% ( 1) 00:20:51.135 16.593 - 16.687: 99.1312% ( 1) 00:20:51.135 16.687 - 16.782: 99.1790% ( 6) 00:20:51.135 16.782 - 16.877: 99.2348% ( 7) 00:20:51.135 16.877 - 16.972: 99.2508% ( 2) 00:20:51.135 16.972 - 17.067: 99.2587% ( 1) 00:20:51.135 17.067 - 17.161: 99.2667% ( 1) 00:20:51.135 17.161 - 17.256: 99.2906% ( 3) 00:20:51.135 17.256 - 17.351: 99.2986% ( 1) 00:20:51.135 17.351 - 17.446: 99.3066% ( 1) 00:20:51.135 17.825 - 17.920: 99.3145% ( 1) 00:20:51.135 18.110 - 18.204: 99.3305% ( 2) 00:20:51.135 18.679 - 18.773: 99.3384% ( 1) 00:20:51.135 22.281 - 22.376: 99.3464% ( 1) 00:20:51.135 28.065 - 28.255: 99.3544% ( 1) 00:20:51.135 
143.360 - 144.119: 99.3623% ( 1) 00:20:51.135 3883.615 - 3907.887: 99.3703% ( 1) 00:20:51.135 3980.705 - 4004.978: 99.8884% ( 65) 00:20:51.135 4004.978 - 4029.250: 100.0000% ( 14) 00:20:51.135 00:20:51.135 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:51.135 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:51.135 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:51.135 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:51.135 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:51.704 [ 00:20:51.704 { 00:20:51.704 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:51.704 "subtype": "Discovery", 00:20:51.704 "listen_addresses": [], 00:20:51.704 "allow_any_host": true, 00:20:51.704 "hosts": [] 00:20:51.704 }, 00:20:51.704 { 00:20:51.704 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:51.704 "subtype": "NVMe", 00:20:51.704 "listen_addresses": [ 00:20:51.704 { 00:20:51.704 "trtype": "VFIOUSER", 00:20:51.704 "adrfam": "IPv4", 00:20:51.704 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:51.704 "trsvcid": "0" 00:20:51.704 } 00:20:51.704 ], 00:20:51.704 "allow_any_host": true, 00:20:51.704 "hosts": [], 00:20:51.704 "serial_number": "SPDK1", 00:20:51.704 "model_number": "SPDK bdev Controller", 00:20:51.704 "max_namespaces": 32, 00:20:51.704 "min_cntlid": 1, 00:20:51.704 "max_cntlid": 65519, 00:20:51.704 "namespaces": [ 00:20:51.704 { 00:20:51.704 "nsid": 1, 00:20:51.704 "bdev_name": "Malloc1", 00:20:51.704 "name": "Malloc1", 00:20:51.704 "nguid": "093DE36957D74FDFBAF8AEA19712C601", 00:20:51.704 "uuid": "093de369-57d7-4fdf-baf8-aea19712c601" 00:20:51.704 }, 00:20:51.704 { 00:20:51.704 "nsid": 2, 00:20:51.704 "bdev_name": "Malloc3", 00:20:51.704 "name": "Malloc3", 00:20:51.704 "nguid": "8A4F62831D124C048F932C380D5AFE53", 00:20:51.704 "uuid": "8a4f6283-1d12-4c04-8f93-2c380d5afe53" 00:20:51.704 } 00:20:51.704 ] 00:20:51.704 }, 00:20:51.704 { 00:20:51.704 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:51.704 "subtype": "NVMe", 00:20:51.704 "listen_addresses": [ 00:20:51.704 { 00:20:51.704 "trtype": "VFIOUSER", 00:20:51.704 "adrfam": "IPv4", 00:20:51.704 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:51.704 "trsvcid": "0" 00:20:51.704 } 00:20:51.704 ], 00:20:51.704 "allow_any_host": true, 00:20:51.704 "hosts": [], 00:20:51.704 "serial_number": "SPDK2", 00:20:51.704 "model_number": "SPDK bdev Controller", 00:20:51.704 "max_namespaces": 32, 00:20:51.704 "min_cntlid": 1, 00:20:51.704 "max_cntlid": 65519, 00:20:51.704 "namespaces": [ 00:20:51.704 { 00:20:51.704 "nsid": 1, 00:20:51.704 "bdev_name": "Malloc2", 00:20:51.704 "name": "Malloc2", 00:20:51.704 "nguid": "ADB46AFA5265445CB79CA3925F0669E0", 00:20:51.704 "uuid": "adb46afa-5265-445c-b79c-a3925f0669e0" 00:20:51.704 } 00:20:51.704 ] 00:20:51.704 } 00:20:51.704 ] 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2079514 00:20:51.704 10:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:51.704 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:51.963 [2024-12-09 10:31:36.395512] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:51.963 Malloc4 00:20:51.963 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:52.533 [2024-12-09 10:31:36.926634] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:52.533 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:52.533 Asynchronous Event Request test 00:20:52.533 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:52.533 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:52.533 Registering asynchronous event callbacks... 00:20:52.533 Starting namespace attribute notice tests for all controllers... 00:20:52.533 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:52.533 aer_cb - Changed Namespace 00:20:52.533 Cleaning up... 
00:20:52.794 [ 00:20:52.794 { 00:20:52.794 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:52.794 "subtype": "Discovery", 00:20:52.794 "listen_addresses": [], 00:20:52.794 "allow_any_host": true, 00:20:52.795 "hosts": [] 00:20:52.795 }, 00:20:52.795 { 00:20:52.795 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:52.795 "subtype": "NVMe", 00:20:52.795 "listen_addresses": [ 00:20:52.795 { 00:20:52.795 "trtype": "VFIOUSER", 00:20:52.795 "adrfam": "IPv4", 00:20:52.795 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:52.795 "trsvcid": "0" 00:20:52.795 } 00:20:52.795 ], 00:20:52.795 "allow_any_host": true, 00:20:52.795 "hosts": [], 00:20:52.795 "serial_number": "SPDK1", 00:20:52.795 "model_number": "SPDK bdev Controller", 00:20:52.795 "max_namespaces": 32, 00:20:52.795 "min_cntlid": 1, 00:20:52.795 "max_cntlid": 65519, 00:20:52.795 "namespaces": [ 00:20:52.795 { 00:20:52.795 "nsid": 1, 00:20:52.795 "bdev_name": "Malloc1", 00:20:52.795 "name": "Malloc1", 00:20:52.795 "nguid": "093DE36957D74FDFBAF8AEA19712C601", 00:20:52.795 "uuid": "093de369-57d7-4fdf-baf8-aea19712c601" 00:20:52.795 }, 00:20:52.795 { 00:20:52.795 "nsid": 2, 00:20:52.795 "bdev_name": "Malloc3", 00:20:52.795 "name": "Malloc3", 00:20:52.795 "nguid": "8A4F62831D124C048F932C380D5AFE53", 00:20:52.795 "uuid": "8a4f6283-1d12-4c04-8f93-2c380d5afe53" 00:20:52.795 } 00:20:52.795 ] 00:20:52.795 }, 00:20:52.795 { 00:20:52.795 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:52.795 "subtype": "NVMe", 00:20:52.795 "listen_addresses": [ 00:20:52.795 { 00:20:52.795 "trtype": "VFIOUSER", 00:20:52.795 "adrfam": "IPv4", 00:20:52.795 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:52.795 "trsvcid": "0" 00:20:52.795 } 00:20:52.795 ], 00:20:52.795 "allow_any_host": true, 00:20:52.795 "hosts": [], 00:20:52.795 "serial_number": "SPDK2", 00:20:52.795 "model_number": "SPDK bdev Controller", 00:20:52.795 "max_namespaces": 32, 00:20:52.795 "min_cntlid": 1, 00:20:52.795 "max_cntlid": 65519, 00:20:52.795 "namespaces": [ 00:20:52.795 { 00:20:52.795 "nsid": 1, 00:20:52.795 "bdev_name": "Malloc2", 00:20:52.795 "name": "Malloc2", 00:20:52.795 "nguid": "ADB46AFA5265445CB79CA3925F0669E0", 00:20:52.795 "uuid": "adb46afa-5265-445c-b79c-a3925f0669e0" 00:20:52.795 }, 00:20:52.795 { 00:20:52.795 "nsid": 2, 00:20:52.795 "bdev_name": "Malloc4", 00:20:52.795 "name": "Malloc4", 00:20:52.795 "nguid": "4BBB348FC3D1499FB04B876D36846D39", 00:20:52.795 "uuid": "4bbb348f-c3d1-499f-b04b-876d36846d39" 00:20:52.795 } 00:20:52.795 ] 00:20:52.795 } 00:20:52.795 ] 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2079514 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2073668 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2073668 ']' 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2073668 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2073668 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2073668' 00:20:52.795 killing process with pid 2073668 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2073668 00:20:52.795 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2073668 00:20:53.366 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:53.366 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:53.366 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2079784 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2079784' 00:20:53.367 Process pid: 2079784 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2079784 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2079784 ']' 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.367 10:31:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:53.367 [2024-12-09 10:31:37.906453] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:53.367 [2024-12-09 10:31:37.907982] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:20:53.367 [2024-12-09 10:31:37.908062] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.627 [2024-12-09 10:31:38.036667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.627 [2024-12-09 10:31:38.158079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.627 [2024-12-09 10:31:38.158193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.627 [2024-12-09 10:31:38.158230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.627 [2024-12-09 10:31:38.158269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.627 [2024-12-09 10:31:38.158294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.627 [2024-12-09 10:31:38.161899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.627 [2024-12-09 10:31:38.162002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.627 [2024-12-09 10:31:38.162107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.627 [2024-12-09 10:31:38.162111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.888 [2024-12-09 10:31:38.338707] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:53.888 [2024-12-09 10:31:38.339241] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:53.888 [2024-12-09 10:31:38.339552] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:20:53.888 [2024-12-09 10:31:38.340544] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:53.888 [2024-12-09 10:31:38.340939] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
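(For orientation: the restart above is the interrupt-mode half of the test — the target comes up with --interrupt-mode and each poll-group thread is switched to interrupt-driven operation, as the spdk_thread_set_interrupt_mode notices show, before the transport and subsystems are recreated below. Condensed to its essentials, and hedged as a sketch rather than the script's exact code — only flags visible in this trace are used, and SPDK_DIR is the same illustrative name as in the earlier sketch:

    # launch the target in interrupt mode on cores 0-3 (flags copied from the trace)
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!
    # after waitforlisten succeeds against the RPC socket, create the transport
    # with the same -M -I flags the script passes for VFIOUSER
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I

The mkdir/bdev_malloc_create/nvmf_create_subsystem/nvmf_subsystem_add_ns/nvmf_subsystem_add_listener sequence that follows in the trace then rebuilds the two vfio-user devices under /var/run/vfio-user.)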
00:20:53.888 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.888 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:53.888 10:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:54.826 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:55.393 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:55.393 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:55.393 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:55.393 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:55.393 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:55.652 Malloc1 00:20:55.910 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:56.168 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:56.735 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:56.993 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:56.993 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:56.993 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:57.559 Malloc2 00:20:57.559 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:57.817 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:58.382 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:58.967 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:58.967 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2079784 00:20:58.967 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2079784 ']' 00:20:58.967 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2079784 00:20:58.967 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:58.967 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.968 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079784 00:20:58.968 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.968 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.968 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079784' 00:20:58.968 killing process with pid 2079784 00:20:58.968 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2079784 00:20:58.968 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2079784 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:59.537 00:20:59.537 real 0m59.337s 00:20:59.537 user 3m49.486s 00:20:59.537 sys 0m5.362s 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:59.537 ************************************ 00:20:59.537 END TEST nvmf_vfio_user 00:20:59.537 ************************************ 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:59.537 ************************************ 00:20:59.537 START TEST nvmf_vfio_user_nvme_compliance 00:20:59.537 ************************************ 00:20:59.537 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:59.537 * Looking for test storage... 
00:20:59.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:59.537 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.537 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.537 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.797 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.798 --rc genhtml_branch_coverage=1 00:20:59.798 --rc genhtml_function_coverage=1 00:20:59.798 --rc genhtml_legend=1 00:20:59.798 --rc geninfo_all_blocks=1 00:20:59.798 --rc geninfo_unexecuted_blocks=1 00:20:59.798 00:20:59.798 ' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.798 --rc genhtml_branch_coverage=1 00:20:59.798 --rc genhtml_function_coverage=1 00:20:59.798 --rc genhtml_legend=1 00:20:59.798 --rc geninfo_all_blocks=1 00:20:59.798 --rc geninfo_unexecuted_blocks=1 00:20:59.798 00:20:59.798 ' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.798 --rc genhtml_branch_coverage=1 00:20:59.798 --rc genhtml_function_coverage=1 00:20:59.798 --rc genhtml_legend=1 00:20:59.798 --rc geninfo_all_blocks=1 00:20:59.798 --rc geninfo_unexecuted_blocks=1 00:20:59.798 00:20:59.798 ' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.798 --rc genhtml_branch_coverage=1 00:20:59.798 --rc genhtml_function_coverage=1 00:20:59.798 --rc genhtml_legend=1 00:20:59.798 --rc geninfo_all_blocks=1 00:20:59.798 --rc 
geninfo_unexecuted_blocks=1 00:20:59.798 00:20:59.798 ' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@2-4,@6: identical PATH re-exports with the /opt prefix entries repeated six times over; duplicates collapsed]
00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH
00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0
00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.798 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2080526 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2080526' 00:20:59.799 Process pid: 2080526 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2080526 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2080526 ']' 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.799 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:59.799 [2024-12-09 10:31:44.406647] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:20:59.799 [2024-12-09 10:31:44.406850] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.058 [2024-12-09 10:31:44.567104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:00.058 [2024-12-09 10:31:44.691628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.058 [2024-12-09 10:31:44.691755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.058 [2024-12-09 10:31:44.691797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.058 [2024-12-09 10:31:44.691826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.058 [2024-12-09 10:31:44.691850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.058 [2024-12-09 10:31:44.695203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.058 [2024-12-09 10:31:44.695304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.058 [2024-12-09 10:31:44.695314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.318 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.318 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:21:00.318 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.256 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:01.516 malloc0 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:21:01.516 10:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:01.516 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.517 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:01.517 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.517 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:21:01.776 00:21:01.776 00:21:01.776 CUnit - A unit testing framework for C - Version 2.1-3 00:21:01.776 http://cunit.sourceforge.net/ 00:21:01.776 00:21:01.776 00:21:01.776 Suite: nvme_compliance 00:21:01.776 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 10:31:46.275783] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:01.776 [2024-12-09 10:31:46.277743] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:21:01.776 [2024-12-09 10:31:46.277817] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:21:01.776 [2024-12-09 10:31:46.277833] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:21:01.776 [2024-12-09 10:31:46.279904] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:01.776 passed 00:21:01.776 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 10:31:46.412360] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:01.776 [2024-12-09 10:31:46.418437] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:02.035 passed 00:21:02.035 Test: admin_identify_ns ...[2024-12-09 10:31:46.557307] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:02.035 [2024-12-09 10:31:46.616784] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:21:02.035 [2024-12-09 10:31:46.624777] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:21:02.035 [2024-12-09 10:31:46.645922] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:21:02.295 passed 00:21:02.295 Test: admin_get_features_mandatory_features ...[2024-12-09 10:31:46.780910] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:02.295 [2024-12-09 10:31:46.783934] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:02.295 passed 00:21:02.295 Test: admin_get_features_optional_features ...[2024-12-09 10:31:46.910170] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:02.295 [2024-12-09 10:31:46.914222] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:02.556 passed 00:21:02.556 Test: admin_set_features_number_of_queues ...[2024-12-09 10:31:47.047923] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:02.556 [2024-12-09 10:31:47.150868] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:02.816 passed 00:21:02.816 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 10:31:47.283894] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:02.816 [2024-12-09 10:31:47.286923] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:02.816 passed 00:21:02.816 Test: admin_get_log_page_with_lpo ...[2024-12-09 10:31:47.419193] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:03.077 [2024-12-09 10:31:47.490777] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:21:03.077 [2024-12-09 10:31:47.503883] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:03.077 passed 00:21:03.077 Test: fabric_property_get ...[2024-12-09 10:31:47.637861] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:03.077 [2024-12-09 10:31:47.639444] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:21:03.077 [2024-12-09 10:31:47.640894] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:03.077 passed 00:21:03.335 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 10:31:47.774248] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:03.335 [2024-12-09 10:31:47.775941] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:21:03.335 [2024-12-09 10:31:47.777310] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:03.335 passed 00:21:03.335 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 10:31:47.910351] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:03.593 [2024-12-09 10:31:47.993770] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:03.593 [2024-12-09 10:31:48.009749] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:03.593 [2024-12-09 10:31:48.014917] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:03.593 passed 00:21:03.593 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 10:31:48.150782] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:03.593 [2024-12-09 10:31:48.152274] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:21:03.593 [2024-12-09 10:31:48.153804] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:21:03.593 passed 00:21:03.852 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 10:31:48.287814] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:03.852 [2024-12-09 10:31:48.365756] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:03.852 [2024-12-09 10:31:48.389750] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:21:03.852 [2024-12-09 10:31:48.394897] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:03.852 passed 00:21:04.112 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 10:31:48.526868] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:04.112 [2024-12-09 10:31:48.528488] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:21:04.112 [2024-12-09 10:31:48.528601] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:21:04.112 [2024-12-09 10:31:48.529896] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:04.112 passed 00:21:04.112 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 10:31:48.665331] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:04.112 [2024-12-09 10:31:48.756743] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:21:04.112 [2024-12-09 10:31:48.764787] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:21:04.372 [2024-12-09 10:31:48.772746] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:21:04.372 [2024-12-09 10:31:48.780798] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:21:04.372 [2024-12-09 10:31:48.809874] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:04.372 passed 00:21:04.372 Test: admin_create_io_sq_verify_pc ...[2024-12-09 10:31:48.941835] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:04.372 [2024-12-09 10:31:48.959779] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:21:04.372 [2024-12-09 10:31:48.977432] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:04.629 passed 00:21:04.629 Test: admin_create_io_qp_max_qps ...[2024-12-09 10:31:49.109799] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:05.564 [2024-12-09 10:31:50.211776] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:21:06.131 [2024-12-09 10:31:50.595197] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:06.131 passed 00:21:06.131 Test: admin_create_io_sq_shared_cq ...[2024-12-09 10:31:50.729477] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:21:06.390 [2024-12-09 10:31:50.860770] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:21:06.390 [2024-12-09 10:31:50.897879] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:21:06.390 passed 00:21:06.390 00:21:06.390 Run Summary: Type Total Ran Passed Failed Inactive 00:21:06.390 suites 1 1 n/a 0 0 00:21:06.390 tests 18 18 18 0 0 00:21:06.390 asserts 
360 360 360 0 n/a 00:21:06.390 00:21:06.390 Elapsed time = 2.015 seconds 00:21:06.390 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2080526 00:21:06.390 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2080526 ']' 00:21:06.390 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2080526 00:21:06.390 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:21:06.390 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.390 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2080526 00:21:06.650 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.650 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.650 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2080526' 00:21:06.650 killing process with pid 2080526 00:21:06.650 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2080526 00:21:06.650 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2080526 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:06.909 00:21:06.909 real 0m7.483s 00:21:06.909 user 0m20.311s 00:21:06.909 sys 0m0.873s 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:21:06.909 ************************************ 00:21:06.909 END TEST nvmf_vfio_user_nvme_compliance 00:21:06.909 ************************************ 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:06.909 ************************************ 00:21:06.909 START TEST nvmf_vfio_user_fuzz 00:21:06.909 ************************************ 00:21:06.909 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:21:07.168 * Looking for test storage... 
00:21:07.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.168 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:07.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.168 --rc genhtml_branch_coverage=1 00:21:07.168 --rc genhtml_function_coverage=1 00:21:07.168 --rc genhtml_legend=1 00:21:07.168 --rc geninfo_all_blocks=1 00:21:07.169 --rc geninfo_unexecuted_blocks=1 00:21:07.169 00:21:07.169 ' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:07.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.169 --rc genhtml_branch_coverage=1 00:21:07.169 --rc genhtml_function_coverage=1 00:21:07.169 --rc genhtml_legend=1 00:21:07.169 --rc geninfo_all_blocks=1 00:21:07.169 --rc geninfo_unexecuted_blocks=1 00:21:07.169 00:21:07.169 ' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:07.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.169 --rc genhtml_branch_coverage=1 00:21:07.169 --rc genhtml_function_coverage=1 00:21:07.169 --rc genhtml_legend=1 00:21:07.169 --rc geninfo_all_blocks=1 00:21:07.169 --rc geninfo_unexecuted_blocks=1 00:21:07.169 00:21:07.169 ' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:07.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.169 --rc genhtml_branch_coverage=1 00:21:07.169 --rc genhtml_function_coverage=1 00:21:07.169 --rc genhtml_legend=1 00:21:07.169 --rc geninfo_all_blocks=1 00:21:07.169 --rc geninfo_unexecuted_blocks=1 00:21:07.169 00:21:07.169 ' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@2-4,@6: identical PATH re-exports with the /opt prefix entries repeated six times over; duplicates collapsed]
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:21:07.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2081510 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2081510' 00:21:07.169 Process pid: 2081510 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2081510 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2081510 ']' 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
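waitforlisten above parks the suite until the freshly spawned nvmf_tgt answers RPC on /var/tmp/spdk.sock. A readiness loop in the same spirit, as an illustrative reconstruction rather than the helper's exact body, reusing the max_retries=100 visible in the xtrace:

  # Sketch of a waitforlisten-style poll: abort if the pid dies,
  # otherwise retry an RPC until the UNIX socket answers.
  wait_for_rpc() {
      local pid=$1 i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited early
          scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
              &> /dev/null && return 0             # socket is up
          sleep 0.5
      done
      return 1
  }

Polling an actual RPC rather than just checking that the socket file exists matters here: the file appears before the app is ready to serve requests.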
00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.169 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:08.109 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.109 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:21:08.109 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.050 malloc0 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
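The rpc_cmd calls above are thin wrappers over scripts/rpc.py, so the vfio-user target that the fuzzer attaches to can be rebuilt by hand against a running nvmf_tgt. A sketch of the same sequence, assuming the default /var/tmp/spdk.sock RPC socket:

  # Recreate the fuzz target state from the xtrace above.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

For VFIOUSER the listener address is a directory, not an IP: the target drops its vfio-user socket under /var/run/vfio-user, which is exactly the traddr recorded in the trid above.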
00:21:09.050 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:41.220 Fuzzing completed. Shutting down the fuzz application 00:21:41.220 00:21:41.220 Dumping successful admin opcodes: 00:21:41.220 9, 10, 00:21:41.220 Dumping successful io opcodes: 00:21:41.220 0, 00:21:41.220 NS: 0x20000081ef00 I/O qp, Total commands completed: 245030, total successful commands: 960, random_seed: 652431296 00:21:41.220 NS: 0x20000081ef00 admin qp, Total commands completed: 49232, total successful commands: 13, random_seed: 2815088896 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2081510 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2081510 ']' 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2081510 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2081510 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.220 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2081510' 00:21:41.221 killing process with pid 2081510 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2081510 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2081510 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:41.221 00:21:41.221 real 0m33.270s 00:21:41.221 user 0m33.226s 00:21:41.221 sys 0m26.068s 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:41.221 ************************************ 
00:21:41.221 END TEST nvmf_vfio_user_fuzz 00:21:41.221 ************************************ 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:41.221 ************************************ 00:21:41.221 START TEST nvmf_auth_target 00:21:41.221 ************************************ 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:41.221 * Looking for test storage... 00:21:41.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:21:41.221 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.221 --rc genhtml_branch_coverage=1 00:21:41.221 --rc genhtml_function_coverage=1 00:21:41.221 --rc genhtml_legend=1 00:21:41.221 --rc geninfo_all_blocks=1 00:21:41.221 --rc geninfo_unexecuted_blocks=1 00:21:41.221 00:21:41.221 ' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.221 --rc genhtml_branch_coverage=1 00:21:41.221 --rc genhtml_function_coverage=1 00:21:41.221 --rc genhtml_legend=1 00:21:41.221 --rc geninfo_all_blocks=1 00:21:41.221 --rc geninfo_unexecuted_blocks=1 00:21:41.221 00:21:41.221 ' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.221 --rc genhtml_branch_coverage=1 00:21:41.221 --rc genhtml_function_coverage=1 00:21:41.221 --rc genhtml_legend=1 00:21:41.221 --rc geninfo_all_blocks=1 00:21:41.221 --rc geninfo_unexecuted_blocks=1 00:21:41.221 00:21:41.221 ' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.221 --rc genhtml_branch_coverage=1 00:21:41.221 --rc genhtml_function_coverage=1 00:21:41.221 --rc genhtml_legend=1 00:21:41.221 --rc geninfo_all_blocks=1 00:21:41.221 --rc geninfo_unexecuted_blocks=1 00:21:41.221 00:21:41.221 ' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.221 10:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.221 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
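Note the one genuine complaint in the trace above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because an empty variable was handed to a numeric -eq test. The failed test simply evaluates false and the script carries on, but the usual defensive spelling gives the expansion a numeric default (enable_feature below is a placeholder, not a function from the script):

    # What the trace shows (fails noisily when VAR is unset or empty):
    [ "$VAR" -eq 1 ] && enable_feature

    # Quiet equivalent: substitute 0 for an empty value.
    [ "${VAR:-0}" -eq 1 ] && enable_feature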
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.222 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:43.761 
10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:43.761 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.761 10:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:43.761 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:43.761 Found net devices under 0000:84:00.0: cvl_0_0 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:43.761 Found net devices under 0000:84:00.1: cvl_0_1 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
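The NIC discovery above is driven entirely by sysfs: for each PCI function on the allow-list (here both E810 ports, vendor 0x8086 device 0x159b, at 0000:84:00.0/.1 bound to the ice driver), the script globs the net/ directory under the device node and keeps interfaces that are up, which is how it arrives at cvl_0_0 and cvl_0_1. Roughly, independent of SPDK's helpers (reading operstate is my assumption for the "up" comparison in the trace):

    pci=0000:84:00.0
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue              # glob unmatched: no netdev bound here
        dev=${netdir##*/}                         # e.g. cvl_0_0
        state=$(cat "$netdir/operstate" 2>/dev/null)
        echo "Found net device under $pci: $dev ($state)"
    done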
net_devs+=("${pci_net_devs[@]}") 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.761 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.762 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.762 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.762 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.762 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.762 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.762 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.762 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.762 10:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:21:43.762 00:21:43.762 --- 10.0.0.2 ping statistics --- 00:21:43.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.762 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:21:43.762 00:21:43.762 --- 10.0.0.1 ping statistics --- 00:21:43.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.762 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2086963 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2086963 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2086963 ']' 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
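With both directions pinging, nvmfappstart launches the target inside that namespace; note the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") prepend in the trace, which wraps every target invocation in ip netns exec. The start-and-wait pattern reduces to the sketch below, where the rpc_get_methods polling loop is my stand-in for SPDK's waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!

    # Wait until the app answers on its default UNIX-domain RPC socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done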
00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.762 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2086998 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eac9d1a0c669f2a4fdd26deba1b9009d63552c745a0288d6 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IXn 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eac9d1a0c669f2a4fdd26deba1b9009d63552c745a0288d6 0 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eac9d1a0c669f2a4fdd26deba1b9009d63552c745a0288d6 0 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eac9d1a0c669f2a4fdd26deba1b9009d63552c745a0288d6 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IXn 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IXn 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.IXn 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2c69aa811fe56913f4f3ca8cd227550e49d82539fe7fab88ecfee39477f490af 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Hsm 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2c69aa811fe56913f4f3ca8cd227550e49d82539fe7fab88ecfee39477f490af 3 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2c69aa811fe56913f4f3ca8cd227550e49d82539fe7fab88ecfee39477f490af 3 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2c69aa811fe56913f4f3ca8cd227550e49d82539fe7fab88ecfee39477f490af 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:44.332 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Hsm 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Hsm 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Hsm 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9771fe15742cdd56f02583ada8572cf5 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dqB 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9771fe15742cdd56f02583ada8572cf5 1 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9771fe15742cdd56f02583ada8572cf5 1 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9771fe15742cdd56f02583ada8572cf5 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dqB 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dqB 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dqB 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0c377162063feaf601e8594ad5d83e2fb8d4e83c4e728eca 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.R5S 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0c377162063feaf601e8594ad5d83e2fb8d4e83c4e728eca 2 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0c377162063feaf601e8594ad5d83e2fb8d4e83c4e728eca 2 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.592 10:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0c377162063feaf601e8594ad5d83e2fb8d4e83c4e728eca 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.R5S 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.R5S 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.R5S 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:44.592 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bef18a15c5de6effa4988b2623aa05010871d697d4293e62 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.EVe 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bef18a15c5de6effa4988b2623aa05010871d697d4293e62 2 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bef18a15c5de6effa4988b2623aa05010871d697d4293e62 2 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bef18a15c5de6effa4988b2623aa05010871d697d4293e62 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.EVe 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.EVe 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.EVe 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:44.593 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e09336e6571a48f470ef92fd9065d976 00:21:44.852 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:44.852 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZYJ 00:21:44.852 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e09336e6571a48f470ef92fd9065d976 1 00:21:44.852 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e09336e6571a48f470ef92fd9065d976 1 00:21:44.852 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e09336e6571a48f470ef92fd9065d976 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZYJ 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZYJ 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZYJ 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b185e19b4094ba7b45bad6eef979f01bd4ae97e28c18698004e3c9e2a26b9f16 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uKh 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key b185e19b4094ba7b45bad6eef979f01bd4ae97e28c18698004e3c9e2a26b9f16 3 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b185e19b4094ba7b45bad6eef979f01bd4ae97e28c18698004e3c9e2a26b9f16 3 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b185e19b4094ba7b45bad6eef979f01bd4ae97e28c18698004e3c9e2a26b9f16 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uKh 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uKh 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.uKh 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2086963 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2086963 ']' 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.853 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2086998 /var/tmp/host.sock 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2086998 ']' 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:45.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
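Each gen_dhchap_key <digest> <len> call above follows one recipe: draw len/2 random bytes from /dev/urandom with xxd -p, wrap the resulting hex text in the DHHC-1 secret encoding via the inline python helper, and store it in a mode-0600 temp file named after the digest. The encoding is checkable from the log itself: base64-decoding a printed secret such as DHHC-1:01:OTc3MWZl...: gives back the ASCII hex string from the trace plus four trailing CRC bytes, so the payload is base64(hex text || CRC-32), with the second field (00/01/02/03) naming none/SHA-256/SHA-384/SHA-512. A hedged re-implementation of the formatting step (function name mine; the CRC byte order is an assumption):

    # Re-create a DHHC-1 secret from a hex string.
    key_to_dhchap() {  # args: <hex string> <hash id 0-3>
        python3 -c 'import base64,binascii,struct,sys; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", binascii.crc32(k))).decode()))' "$1" "$2"
    }

    key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex chars
    file=$(mktemp -t spdk.key-null.XXX)       # e.g. /tmp/spdk.key-null.IXn as in the log
    key_to_dhchap "$key" 0 > "$file"          # hash id 0 = no digest (a "null" key)
    chmod 0600 "$file"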
00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.420 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IXn 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.IXn 00:21:46.356 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.IXn 00:21:46.616 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Hsm ]] 00:21:46.616 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hsm 00:21:46.616 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.616 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.616 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.616 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hsm 00:21:46.616 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hsm 00:21:47.184 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:47.184 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dqB 00:21:47.184 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.184 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.184 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.184 10:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dqB 00:21:47.184 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dqB 00:21:47.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.R5S ]] 00:21:47.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R5S 00:21:47.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R5S 00:21:47.751 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R5S 00:21:48.009 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:48.009 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.EVe 00:21:48.009 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.009 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.009 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.009 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.EVe 00:21:48.009 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.EVe 00:21:48.577 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZYJ ]] 00:21:48.577 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZYJ 00:21:48.577 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.577 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.577 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.577 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZYJ 00:21:48.577 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZYJ 00:21:49.142 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:49.143 10:32:33 
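Every generated key file is registered twice: once with the target over /var/tmp/spdk.sock (rpc_cmd) and once with the host-side spdk_tgt over /var/tmp/host.sock (hostrpc), under the matching names key0-key3 and ckey0-ckey2, since DH-HMAC-CHAP needs both ends to resolve the same named keyring entries. The loop condenses to roughly this (rpc.py's default socket is the target's):

    # keys[i] / ckeys[i] hold the /tmp/spdk.key-* files generated above.
    for i in "${!keys[@]}"; do
        rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
        rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]} ]]; then        # ckeys[3] is empty, so key3 gets no ctrlr key
            rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done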
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uKh 00:21:49.143 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.143 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.143 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.143 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.uKh 00:21:49.143 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uKh 00:21:49.401 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:49.401 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:49.401 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.401 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.401 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:49.401 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.968 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.968 
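Here the digest x dhgroup x key matrix starts (the nested for loops at target/auth.sh@118-121): for each combination the host is pinned to exactly one algorithm pair with bdev_nvme_set_options, the target is told which named keys authenticate this host NQN with nvmf_subsystem_add_host, and bdev_nvme_attach_controller then performs the authenticated fabric connect. The first iteration (sha256/null/key0), stripped of the helper wrappers and long paths:

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null

    # ckey0 present -> bidirectional auth: the host also verifies the controller.
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0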
10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.535 00:21:50.535 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.535 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.535 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.792 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.793 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.793 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.793 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.793 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.793 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.793 { 00:21:50.793 "cntlid": 1, 00:21:50.793 "qid": 0, 00:21:50.793 "state": "enabled", 00:21:50.793 "thread": "nvmf_tgt_poll_group_000", 00:21:50.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:50.793 "listen_address": { 00:21:50.793 "trtype": "TCP", 00:21:50.793 "adrfam": "IPv4", 00:21:50.793 "traddr": "10.0.0.2", 00:21:50.793 "trsvcid": "4420" 00:21:50.793 }, 00:21:50.793 "peer_address": { 00:21:50.793 "trtype": "TCP", 00:21:50.793 "adrfam": "IPv4", 00:21:50.793 "traddr": "10.0.0.1", 00:21:50.793 "trsvcid": "37864" 00:21:50.793 }, 00:21:50.793 "auth": { 00:21:50.793 "state": "completed", 00:21:50.793 "digest": "sha256", 00:21:50.793 "dhgroup": "null" 00:21:50.793 } 00:21:50.793 } 00:21:50.793 ]' 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.051 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.618 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:21:51.618 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:21:53.524 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.524 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:53.525 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.525 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.525 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.525 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.525 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:53.525 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.097 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.097 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.039 00:21:55.039 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.039 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.039 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.608 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.608 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.608 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.608 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.608 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.608 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.608 { 00:21:55.608 "cntlid": 3, 00:21:55.608 "qid": 0, 00:21:55.608 "state": "enabled", 00:21:55.608 "thread": "nvmf_tgt_poll_group_000", 00:21:55.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:55.608 "listen_address": { 00:21:55.608 "trtype": "TCP", 00:21:55.608 "adrfam": "IPv4", 00:21:55.608 "traddr": "10.0.0.2", 00:21:55.608 "trsvcid": "4420" 00:21:55.608 }, 00:21:55.608 "peer_address": { 00:21:55.608 "trtype": "TCP", 00:21:55.608 "adrfam": "IPv4", 00:21:55.608 "traddr": "10.0.0.1", 00:21:55.608 "trsvcid": "37898" 00:21:55.608 }, 00:21:55.608 "auth": { 00:21:55.608 "state": "completed", 00:21:55.608 "digest": "sha256", 00:21:55.608 "dhgroup": "null" 00:21:55.608 } 00:21:55.608 } 00:21:55.608 ]' 00:21:55.608 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.608 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.608 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.608 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:55.608 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.608 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.608 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.608 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.177 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:21:56.178 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.082 10:32:42 
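
The nvmf_subsystem_add_host call that just completed is the target-side half of each cycle: once a host NQN is added with --dhchap-key, the subsystem refuses that host's connects unless DH-HMAC-CHAP completes with the named key, and --dhchap-ctrlr-key names the key the controller uses to prove itself back for bidirectional authentication. Spelled out as a direct RPC (key2/ckey2 are names of keyring entries the script registered before this excerpt):

# Target-side RPC: require DH-HMAC-CHAP from this host on cnode0.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
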
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.082 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.647 00:21:58.647 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.647 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.647 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.214 { 00:21:59.214 "cntlid": 5, 00:21:59.214 "qid": 0, 00:21:59.214 "state": "enabled", 00:21:59.214 "thread": "nvmf_tgt_poll_group_000", 00:21:59.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:59.214 "listen_address": { 00:21:59.214 "trtype": "TCP", 00:21:59.214 "adrfam": "IPv4", 00:21:59.214 "traddr": "10.0.0.2", 00:21:59.214 "trsvcid": "4420" 00:21:59.214 }, 00:21:59.214 "peer_address": { 00:21:59.214 "trtype": "TCP", 00:21:59.214 "adrfam": "IPv4", 00:21:59.214 "traddr": "10.0.0.1", 00:21:59.214 "trsvcid": "44648" 00:21:59.214 }, 00:21:59.214 "auth": { 00:21:59.214 "state": "completed", 00:21:59.214 "digest": "sha256", 00:21:59.214 "dhgroup": "null" 00:21:59.214 } 00:21:59.214 } 00:21:59.214 ]' 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.214 10:32:43 
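
The block of jq probes above is the test's core assertion, repeated after every reconnect: nvmf_subsystem_get_qpairs returns one JSON object per queue pair, and its auth member reports state "completed" only once the DH-HMAC-CHAP exchange finished, together with the digest and dhgroup that were actually negotiated. Condensed into a standalone check (rpc.py path shortened for readability):

# Assert that the admin queue authenticated with the expected parameters.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]   # "null" dhgroup in these cycles
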
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.214 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.153 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:00.153 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:02.054 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.055 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:02.055 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.055 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.055 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.055 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.055 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:02.055 10:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.623 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.194 00:22:03.194 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.194 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.194 10:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.763 { 00:22:03.763 "cntlid": 7, 00:22:03.763 "qid": 0, 00:22:03.763 "state": "enabled", 00:22:03.763 "thread": "nvmf_tgt_poll_group_000", 00:22:03.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:03.763 "listen_address": { 00:22:03.763 "trtype": "TCP", 00:22:03.763 "adrfam": "IPv4", 00:22:03.763 "traddr": "10.0.0.2", 00:22:03.763 "trsvcid": "4420" 00:22:03.763 }, 00:22:03.763 "peer_address": { 00:22:03.763 "trtype": "TCP", 00:22:03.763 "adrfam": "IPv4", 00:22:03.763 "traddr": "10.0.0.1", 00:22:03.763 "trsvcid": "44664" 00:22:03.763 }, 00:22:03.763 "auth": { 00:22:03.763 "state": "completed", 00:22:03.763 "digest": "sha256", 00:22:03.763 "dhgroup": "null" 00:22:03.763 } 00:22:03.763 } 00:22:03.763 ]' 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.763 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.334 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:04.334 10:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:05.719 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.719 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.720 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.720 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.979 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.979 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.979 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.979 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:05.979 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
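
The trace has now moved from dhgroup null to ffdhe2048. bdev_nvme_set_options is reissued before every reconnect so the host initiator offers exactly one digest/dhgroup pair, forcing the target to either negotiate that combination or fail. Reduced to a skeleton built from the auth.sh helpers visible in this trace (the real script also iterates digests and groups beyond those shown in this excerpt):

# Outer loop: dhgroup under test; inner loop: key index 0..3.
for dhgroup in "null" ffdhe2048 ffdhe3072; do
    for keyid in 0 1 2 3; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                                      --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done
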
common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.238 10:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.174 00:22:07.174 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.174 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.174 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.433 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.433 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.433 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.433 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.433 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.433 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.433 { 00:22:07.433 "cntlid": 9, 00:22:07.433 "qid": 0, 00:22:07.433 "state": "enabled", 00:22:07.433 "thread": "nvmf_tgt_poll_group_000", 00:22:07.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:07.433 "listen_address": { 00:22:07.433 "trtype": "TCP", 00:22:07.433 "adrfam": "IPv4", 00:22:07.433 "traddr": "10.0.0.2", 00:22:07.433 "trsvcid": "4420" 00:22:07.433 }, 00:22:07.433 "peer_address": { 00:22:07.433 "trtype": "TCP", 00:22:07.433 "adrfam": "IPv4", 00:22:07.433 "traddr": "10.0.0.1", 00:22:07.433 "trsvcid": "48872" 00:22:07.433 }, 00:22:07.433 "auth": { 00:22:07.433 "state": "completed", 00:22:07.433 "digest": "sha256", 00:22:07.433 "dhgroup": "ffdhe2048" 00:22:07.433 } 00:22:07.433 } 00:22:07.433 ]' 00:22:07.433 10:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.433 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.433 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.692 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:22:07.692 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.692 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.692 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.692 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.260 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:08.260 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.161 10:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.419 10:32:55 
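
The ckey=(...) assignment traced above is the mechanism that makes key index 3 special later on: bash's ${var:+words} expansion yields the alternate words only when var is set and non-empty, so when ckeys[3] is empty the array stays empty and no --dhchap-ctrlr-key argument reaches the RPC at all. In isolation:

# ${var:+...} drops the optional flag when the controller key is absent.
ckeys=([0]=aaaa [3]="")
ckey=(${ckeys[0]:+--dhchap-ctrlr-key "ckey0"}); echo "${#ckey[@]}"   # 2
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"}); echo "${#ckey[@]}"   # 0
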
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.419 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.983 00:22:10.983 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.983 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.983 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.548 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.548 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.549 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.549 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.549 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.549 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.549 { 00:22:11.549 "cntlid": 11, 00:22:11.549 "qid": 0, 00:22:11.549 "state": "enabled", 00:22:11.549 "thread": "nvmf_tgt_poll_group_000", 00:22:11.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:11.549 "listen_address": { 00:22:11.549 "trtype": "TCP", 00:22:11.549 "adrfam": "IPv4", 00:22:11.549 "traddr": "10.0.0.2", 00:22:11.549 "trsvcid": "4420" 00:22:11.549 }, 00:22:11.549 "peer_address": { 00:22:11.549 "trtype": "TCP", 00:22:11.549 "adrfam": "IPv4", 00:22:11.549 "traddr": "10.0.0.1", 00:22:11.549 "trsvcid": "48900" 00:22:11.549 }, 00:22:11.549 "auth": { 00:22:11.549 "state": "completed", 00:22:11.549 "digest": "sha256", 00:22:11.549 "dhgroup": "ffdhe2048" 00:22:11.549 } 00:22:11.549 } 00:22:11.549 ]' 00:22:11.549 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.549 10:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.549 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.808 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:11.808 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.808 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.808 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.808 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.454 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:22:12.454 10:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.364 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:14.623 10:32:59 
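
A detail that helps when reading the qpair dumps: cntlid has been climbing by two per cycle (1, 3, 5, 7, 9, 11 so far), which is consistent with each connect_authenticate creating two controllers on the target, one for the SPDK bdev attach whose qpair is dumped here and one for the kernel-side nvme connect that follows it. A quick way to pull the sequence out of a captured run (log file name hypothetical):

# List the controller IDs seen in the qpair dumps, in order.
grep -o '"cntlid": [0-9]*' nvmf_auth_target.log | awk '{print $2}' | paste -sd, -
# -> 1,3,5,7,9,11,...
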
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.623 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.192 00:22:15.192 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.192 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.192 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.450 { 00:22:15.450 "cntlid": 13, 00:22:15.450 "qid": 0, 00:22:15.450 "state": "enabled", 00:22:15.450 "thread": "nvmf_tgt_poll_group_000", 00:22:15.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:15.450 "listen_address": { 00:22:15.450 "trtype": "TCP", 00:22:15.450 "adrfam": "IPv4", 00:22:15.450 "traddr": "10.0.0.2", 00:22:15.450 "trsvcid": "4420" 00:22:15.450 }, 00:22:15.450 "peer_address": { 00:22:15.450 "trtype": "TCP", 00:22:15.450 "adrfam": "IPv4", 00:22:15.450 "traddr": "10.0.0.1", 00:22:15.450 "trsvcid": "48936" 00:22:15.450 }, 00:22:15.450 "auth": { 00:22:15.450 "state": "completed", 00:22:15.450 "digest": 
"sha256", 00:22:15.450 "dhgroup": "ffdhe2048" 00:22:15.450 } 00:22:15.450 } 00:22:15.450 ]' 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.450 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.709 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:15.709 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.709 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.709 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.709 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.987 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:15.987 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:17.893 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.152 10:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.152 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.719 00:22:18.719 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.719 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.719 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.977 { 00:22:18.977 "cntlid": 15, 00:22:18.977 "qid": 0, 00:22:18.977 "state": "enabled", 00:22:18.977 "thread": "nvmf_tgt_poll_group_000", 00:22:18.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:18.977 "listen_address": { 00:22:18.977 "trtype": "TCP", 00:22:18.977 "adrfam": "IPv4", 00:22:18.977 "traddr": "10.0.0.2", 00:22:18.977 "trsvcid": "4420" 00:22:18.977 }, 00:22:18.977 "peer_address": { 00:22:18.977 "trtype": "TCP", 00:22:18.977 "adrfam": "IPv4", 00:22:18.977 "traddr": "10.0.0.1", 00:22:18.977 
"trsvcid": "49478" 00:22:18.977 }, 00:22:18.977 "auth": { 00:22:18.977 "state": "completed", 00:22:18.977 "digest": "sha256", 00:22:18.977 "dhgroup": "ffdhe2048" 00:22:18.977 } 00:22:18.977 } 00:22:18.977 ]' 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.977 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.235 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:19.235 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.235 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.235 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.235 10:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.801 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:19.801 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:21.703 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.703 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:21.703 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.703 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.703 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.703 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.703 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.704 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:21.704 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:22.637 10:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.637 10:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.206 00:22:23.206 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.206 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.206 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.464 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.465 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.465 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.465 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.465 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.465 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.465 { 00:22:23.465 "cntlid": 17, 00:22:23.465 "qid": 0, 00:22:23.465 "state": "enabled", 00:22:23.465 "thread": "nvmf_tgt_poll_group_000", 00:22:23.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:23.465 "listen_address": { 00:22:23.465 "trtype": "TCP", 00:22:23.465 "adrfam": "IPv4", 
00:22:23.465 "traddr": "10.0.0.2", 00:22:23.465 "trsvcid": "4420" 00:22:23.465 }, 00:22:23.465 "peer_address": { 00:22:23.465 "trtype": "TCP", 00:22:23.465 "adrfam": "IPv4", 00:22:23.465 "traddr": "10.0.0.1", 00:22:23.465 "trsvcid": "49514" 00:22:23.465 }, 00:22:23.465 "auth": { 00:22:23.465 "state": "completed", 00:22:23.465 "digest": "sha256", 00:22:23.465 "dhgroup": "ffdhe3072" 00:22:23.465 } 00:22:23.465 } 00:22:23.465 ]' 00:22:23.465 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.725 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.725 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.725 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:23.725 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.725 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.725 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.725 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.293 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:24.293 10:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:26.199 10:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.769 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.029 00:22:27.287 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.287 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.287 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.854 { 
00:22:27.854 "cntlid": 19, 00:22:27.854 "qid": 0, 00:22:27.854 "state": "enabled", 00:22:27.854 "thread": "nvmf_tgt_poll_group_000", 00:22:27.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:27.854 "listen_address": { 00:22:27.854 "trtype": "TCP", 00:22:27.854 "adrfam": "IPv4", 00:22:27.854 "traddr": "10.0.0.2", 00:22:27.854 "trsvcid": "4420" 00:22:27.854 }, 00:22:27.854 "peer_address": { 00:22:27.854 "trtype": "TCP", 00:22:27.854 "adrfam": "IPv4", 00:22:27.854 "traddr": "10.0.0.1", 00:22:27.854 "trsvcid": "47352" 00:22:27.854 }, 00:22:27.854 "auth": { 00:22:27.854 "state": "completed", 00:22:27.854 "digest": "sha256", 00:22:27.854 "dhgroup": "ffdhe3072" 00:22:27.854 } 00:22:27.854 } 00:22:27.854 ]' 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.854 10:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.788 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:22:28.789 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.687 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.943 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.200 00:22:31.457 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.457 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.457 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.716 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.716 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.716 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.716 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.716 10:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.716 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.716 { 00:22:31.716 "cntlid": 21, 00:22:31.716 "qid": 0, 00:22:31.716 "state": "enabled", 00:22:31.716 "thread": "nvmf_tgt_poll_group_000", 00:22:31.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:31.716 "listen_address": { 00:22:31.716 "trtype": "TCP", 00:22:31.716 "adrfam": "IPv4", 00:22:31.716 "traddr": "10.0.0.2", 00:22:31.716 "trsvcid": "4420" 00:22:31.716 }, 00:22:31.716 "peer_address": { 00:22:31.716 "trtype": "TCP", 00:22:31.716 "adrfam": "IPv4", 00:22:31.716 "traddr": "10.0.0.1", 00:22:31.716 "trsvcid": "47376" 00:22:31.716 }, 00:22:31.716 "auth": { 00:22:31.716 "state": "completed", 00:22:31.716 "digest": "sha256", 00:22:31.716 "dhgroup": "ffdhe3072" 00:22:31.716 } 00:22:31.716 } 00:22:31.716 ]' 00:22:31.716 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.975 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:31.975 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.975 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:31.975 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.975 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.975 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.975 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.541 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:32.541 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:34.446 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.013 10:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.581 00:22:35.581 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.581 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.581 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.840 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.840 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.840 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.840 10:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.840 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.100 { 00:22:36.100 "cntlid": 23, 00:22:36.100 "qid": 0, 00:22:36.100 "state": "enabled", 00:22:36.100 "thread": "nvmf_tgt_poll_group_000", 00:22:36.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:36.100 "listen_address": { 00:22:36.100 "trtype": "TCP", 00:22:36.100 "adrfam": "IPv4", 00:22:36.100 "traddr": "10.0.0.2", 00:22:36.100 "trsvcid": "4420" 00:22:36.100 }, 00:22:36.100 "peer_address": { 00:22:36.100 "trtype": "TCP", 00:22:36.100 "adrfam": "IPv4", 00:22:36.100 "traddr": "10.0.0.1", 00:22:36.100 "trsvcid": "53858" 00:22:36.100 }, 00:22:36.100 "auth": { 00:22:36.100 "state": "completed", 00:22:36.100 "digest": "sha256", 00:22:36.100 "dhgroup": "ffdhe3072" 00:22:36.100 } 00:22:36.100 } 00:22:36.100 ]' 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.100 10:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.670 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:36.670 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:38.577 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.218 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.785 00:22:39.785 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.785 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.785 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.352 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.352 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.353 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.353 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.353 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.353 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.353 { 00:22:40.353 "cntlid": 25, 00:22:40.353 "qid": 0, 00:22:40.353 "state": "enabled", 00:22:40.353 "thread": "nvmf_tgt_poll_group_000", 00:22:40.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:40.353 "listen_address": { 00:22:40.353 "trtype": "TCP", 00:22:40.353 "adrfam": "IPv4", 00:22:40.353 "traddr": "10.0.0.2", 00:22:40.353 "trsvcid": "4420" 00:22:40.353 }, 00:22:40.353 "peer_address": { 00:22:40.353 "trtype": "TCP", 00:22:40.353 "adrfam": "IPv4", 00:22:40.353 "traddr": "10.0.0.1", 00:22:40.353 "trsvcid": "53888" 00:22:40.353 }, 00:22:40.353 "auth": { 00:22:40.353 "state": "completed", 00:22:40.353 "digest": "sha256", 00:22:40.353 "dhgroup": "ffdhe4096" 00:22:40.353 } 00:22:40.353 } 00:22:40.353 ]' 00:22:40.353 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:40.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.180 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:41.180 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:43.086 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.344 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.345 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.911 00:22:43.911 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.911 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.911 10:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.477 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.477 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.477 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.477 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.477 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.477 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.477 { 00:22:44.477 "cntlid": 27, 00:22:44.477 "qid": 0, 00:22:44.477 "state": "enabled", 00:22:44.477 "thread": "nvmf_tgt_poll_group_000", 00:22:44.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:44.477 "listen_address": { 00:22:44.477 "trtype": "TCP", 00:22:44.477 "adrfam": "IPv4", 00:22:44.477 "traddr": "10.0.0.2", 00:22:44.477 "trsvcid": "4420" 00:22:44.477 }, 00:22:44.477 "peer_address": { 00:22:44.477 "trtype": "TCP", 00:22:44.477 "adrfam": "IPv4", 00:22:44.477 "traddr": "10.0.0.1", 00:22:44.477 "trsvcid": "53908" 00:22:44.477 }, 00:22:44.477 "auth": { 00:22:44.477 "state": "completed", 00:22:44.477 "digest": "sha256", 00:22:44.477 "dhgroup": "ffdhe4096" 00:22:44.477 } 00:22:44.477 } 00:22:44.477 ]' 00:22:44.477 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.477 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:44.477 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.477 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:44.477 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.736 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.736 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.736 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.994 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:22:44.994 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:22:46.897 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.898 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.898 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:46.898 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.898 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.898 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.898 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.898 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:46.898 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.158 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.809 00:22:47.809 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.809 10:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.809 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.068 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.068 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.068 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.068 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.325 { 00:22:48.325 "cntlid": 29, 00:22:48.325 "qid": 0, 00:22:48.325 "state": "enabled", 00:22:48.325 "thread": "nvmf_tgt_poll_group_000", 00:22:48.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:48.325 "listen_address": { 00:22:48.325 "trtype": "TCP", 00:22:48.325 "adrfam": "IPv4", 00:22:48.325 "traddr": "10.0.0.2", 00:22:48.325 "trsvcid": "4420" 00:22:48.325 }, 00:22:48.325 "peer_address": { 00:22:48.325 "trtype": "TCP", 00:22:48.325 "adrfam": "IPv4", 00:22:48.325 "traddr": "10.0.0.1", 00:22:48.325 "trsvcid": "57878" 00:22:48.325 }, 00:22:48.325 "auth": { 00:22:48.325 "state": "completed", 00:22:48.325 "digest": "sha256", 00:22:48.325 "dhgroup": "ffdhe4096" 00:22:48.325 } 00:22:48.325 } 00:22:48.325 ]' 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.325 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.260 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:49.260 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret 
DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.162 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.099 00:22:52.099 10:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.099 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.099 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.667 { 00:22:52.667 "cntlid": 31, 00:22:52.667 "qid": 0, 00:22:52.667 "state": "enabled", 00:22:52.667 "thread": "nvmf_tgt_poll_group_000", 00:22:52.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:52.667 "listen_address": { 00:22:52.667 "trtype": "TCP", 00:22:52.667 "adrfam": "IPv4", 00:22:52.667 "traddr": "10.0.0.2", 00:22:52.667 "trsvcid": "4420" 00:22:52.667 }, 00:22:52.667 "peer_address": { 00:22:52.667 "trtype": "TCP", 00:22:52.667 "adrfam": "IPv4", 00:22:52.667 "traddr": "10.0.0.1", 00:22:52.667 "trsvcid": "57920" 00:22:52.667 }, 00:22:52.667 "auth": { 00:22:52.667 "state": "completed", 00:22:52.667 "digest": "sha256", 00:22:52.667 "dhgroup": "ffdhe4096" 00:22:52.667 } 00:22:52.667 } 00:22:52.667 ]' 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.667 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.235 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:53.235 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret 
DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:22:55.141 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:55.142 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.708 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.659 00:22:56.659 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.659 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.659 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.224 { 00:22:57.224 "cntlid": 33, 00:22:57.224 "qid": 0, 00:22:57.224 "state": "enabled", 00:22:57.224 "thread": "nvmf_tgt_poll_group_000", 00:22:57.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:57.224 "listen_address": { 00:22:57.224 "trtype": "TCP", 00:22:57.224 "adrfam": "IPv4", 00:22:57.224 "traddr": "10.0.0.2", 00:22:57.224 "trsvcid": "4420" 00:22:57.224 }, 00:22:57.224 "peer_address": { 00:22:57.224 "trtype": "TCP", 00:22:57.224 "adrfam": "IPv4", 00:22:57.224 "traddr": "10.0.0.1", 00:22:57.224 "trsvcid": "57104" 00:22:57.224 }, 00:22:57.224 "auth": { 00:22:57.224 "state": "completed", 00:22:57.224 "digest": "sha256", 00:22:57.224 "dhgroup": "ffdhe6144" 00:22:57.224 } 00:22:57.224 } 00:22:57.224 ]' 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.224 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.790 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret 
DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:57.790 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:59.693 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.264 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.646 00:23:01.646 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.646 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.646 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.646 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.646 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.646 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.646 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.906 { 00:23:01.906 "cntlid": 35, 00:23:01.906 "qid": 0, 00:23:01.906 "state": "enabled", 00:23:01.906 "thread": "nvmf_tgt_poll_group_000", 00:23:01.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:01.906 "listen_address": { 00:23:01.906 "trtype": "TCP", 00:23:01.906 "adrfam": "IPv4", 00:23:01.906 "traddr": "10.0.0.2", 00:23:01.906 "trsvcid": "4420" 00:23:01.906 }, 00:23:01.906 "peer_address": { 00:23:01.906 "trtype": "TCP", 00:23:01.906 "adrfam": "IPv4", 00:23:01.906 "traddr": "10.0.0.1", 00:23:01.906 "trsvcid": "57126" 00:23:01.906 }, 00:23:01.906 "auth": { 00:23:01.906 "state": "completed", 00:23:01.906 "digest": "sha256", 00:23:01.906 "dhgroup": "ffdhe6144" 00:23:01.906 } 00:23:01.906 } 00:23:01.906 ]' 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.906 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.472 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:23:02.472 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.382 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.641 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.577 00:23:05.837 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.837 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.837 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.404 { 00:23:06.404 "cntlid": 37, 00:23:06.404 "qid": 0, 00:23:06.404 "state": "enabled", 00:23:06.404 "thread": "nvmf_tgt_poll_group_000", 00:23:06.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:06.404 "listen_address": { 00:23:06.404 "trtype": "TCP", 00:23:06.404 "adrfam": "IPv4", 00:23:06.404 "traddr": "10.0.0.2", 00:23:06.404 "trsvcid": "4420" 00:23:06.404 }, 00:23:06.404 "peer_address": { 00:23:06.404 "trtype": "TCP", 00:23:06.404 "adrfam": "IPv4", 00:23:06.404 "traddr": "10.0.0.1", 00:23:06.404 "trsvcid": "57140" 00:23:06.404 }, 00:23:06.404 "auth": { 00:23:06.404 "state": "completed", 00:23:06.404 "digest": "sha256", 00:23:06.404 "dhgroup": "ffdhe6144" 00:23:06.404 } 00:23:06.404 } 00:23:06.404 ]' 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:06.404 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.404 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.404 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:23:06.404 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.676 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:23:06.676 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.579 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.858 10:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:08.858 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:10.240 00:23:10.240 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.241 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.241 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.809 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.809 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.809 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.809 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.809 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.809 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.809 { 00:23:10.809 "cntlid": 39, 00:23:10.809 "qid": 0, 00:23:10.809 "state": "enabled", 00:23:10.809 "thread": "nvmf_tgt_poll_group_000", 00:23:10.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:10.809 "listen_address": { 00:23:10.809 "trtype": "TCP", 00:23:10.809 "adrfam": "IPv4", 00:23:10.809 "traddr": "10.0.0.2", 00:23:10.809 "trsvcid": "4420" 00:23:10.809 }, 00:23:10.809 "peer_address": { 00:23:10.809 "trtype": "TCP", 00:23:10.809 "adrfam": "IPv4", 00:23:10.809 "traddr": "10.0.0.1", 00:23:10.809 "trsvcid": "56322" 00:23:10.809 }, 00:23:10.809 "auth": { 00:23:10.809 "state": "completed", 00:23:10.809 "digest": "sha256", 00:23:10.809 "dhgroup": "ffdhe6144" 00:23:10.809 } 00:23:10.809 } 00:23:10.809 ]' 00:23:10.809 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.068 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:11.068 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.068 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:11.068 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.068 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:23:11.068 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.068 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.635 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:23:11.635 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:13.542 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.110 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.490 00:23:15.748 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.748 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.748 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.006 { 00:23:16.006 "cntlid": 41, 00:23:16.006 "qid": 0, 00:23:16.006 "state": "enabled", 00:23:16.006 "thread": "nvmf_tgt_poll_group_000", 00:23:16.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:16.006 "listen_address": { 00:23:16.006 "trtype": "TCP", 00:23:16.006 "adrfam": "IPv4", 00:23:16.006 "traddr": "10.0.0.2", 00:23:16.006 "trsvcid": "4420" 00:23:16.006 }, 00:23:16.006 "peer_address": { 00:23:16.006 "trtype": "TCP", 00:23:16.006 "adrfam": "IPv4", 00:23:16.006 "traddr": "10.0.0.1", 00:23:16.006 "trsvcid": "56350" 00:23:16.006 }, 00:23:16.006 "auth": { 00:23:16.006 "state": "completed", 00:23:16.006 "digest": "sha256", 00:23:16.006 "dhgroup": "ffdhe8192" 00:23:16.006 } 00:23:16.006 } 00:23:16.006 ]' 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:16.006 10:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.006 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.938 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:23:16.938 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.835 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.094 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.000 00:23:21.000 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.000 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.000 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.259 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.259 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.259 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.259 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.259 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.259 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.259 { 00:23:21.259 "cntlid": 43, 00:23:21.259 "qid": 0, 00:23:21.259 "state": "enabled", 00:23:21.259 "thread": "nvmf_tgt_poll_group_000", 00:23:21.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:21.259 "listen_address": { 00:23:21.259 "trtype": "TCP", 00:23:21.259 "adrfam": "IPv4", 00:23:21.259 "traddr": "10.0.0.2", 00:23:21.259 "trsvcid": "4420" 00:23:21.259 }, 00:23:21.259 "peer_address": { 00:23:21.259 "trtype": "TCP", 00:23:21.259 "adrfam": "IPv4", 00:23:21.259 "traddr": "10.0.0.1", 00:23:21.259 "trsvcid": "37808" 00:23:21.259 }, 00:23:21.259 "auth": { 00:23:21.259 "state": "completed", 00:23:21.259 "digest": "sha256", 00:23:21.259 "dhgroup": "ffdhe8192" 00:23:21.259 } 00:23:21.259 } 00:23:21.259 ]' 00:23:21.259 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.519 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:23:21.519 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.519 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:21.519 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.519 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.519 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.519 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.172 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:23:22.172 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:24.079 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:24.647 10:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.647 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.025 00:23:26.025 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.025 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.025 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.591 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.592 { 00:23:26.592 "cntlid": 45, 00:23:26.592 "qid": 0, 00:23:26.592 "state": "enabled", 00:23:26.592 "thread": "nvmf_tgt_poll_group_000", 00:23:26.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:26.592 "listen_address": { 00:23:26.592 "trtype": "TCP", 00:23:26.592 "adrfam": "IPv4", 00:23:26.592 "traddr": "10.0.0.2", 00:23:26.592 "trsvcid": "4420" 00:23:26.592 }, 00:23:26.592 "peer_address": { 00:23:26.592 "trtype": "TCP", 00:23:26.592 "adrfam": "IPv4", 00:23:26.592 "traddr": "10.0.0.1", 00:23:26.592 "trsvcid": "37830" 00:23:26.592 }, 00:23:26.592 "auth": { 00:23:26.592 "state": "completed", 00:23:26.592 "digest": "sha256", 00:23:26.592 "dhgroup": "ffdhe8192" 00:23:26.592 } 00:23:26.592 } 00:23:26.592 ]' 00:23:26.592 
10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.592 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.530 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:23:27.530 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:29.432 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:30.001 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:23:30.001 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.001 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:30.001 10:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:30.002 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:31.909 00:23:31.909 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.909 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.909 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.475 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.475 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.475 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.475 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.475 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.475 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.475 { 00:23:32.475 "cntlid": 47, 00:23:32.475 "qid": 0, 00:23:32.476 "state": "enabled", 00:23:32.476 "thread": "nvmf_tgt_poll_group_000", 00:23:32.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:32.476 "listen_address": { 00:23:32.476 "trtype": "TCP", 00:23:32.476 "adrfam": "IPv4", 00:23:32.476 "traddr": "10.0.0.2", 00:23:32.476 "trsvcid": "4420" 00:23:32.476 }, 00:23:32.476 "peer_address": { 00:23:32.476 "trtype": "TCP", 00:23:32.476 "adrfam": "IPv4", 00:23:32.476 "traddr": "10.0.0.1", 00:23:32.476 "trsvcid": "41986" 00:23:32.476 }, 00:23:32.476 "auth": { 00:23:32.476 "state": "completed", 00:23:32.476 
"digest": "sha256", 00:23:32.476 "dhgroup": "ffdhe8192" 00:23:32.476 } 00:23:32.476 } 00:23:32.476 ]' 00:23:32.476 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.476 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:32.476 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.476 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:32.476 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.734 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.734 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.734 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.992 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:23:32.992 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:34.898 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:23:35.464 10:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.464 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.031 00:23:36.031 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.031 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.031 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.598 { 00:23:36.598 "cntlid": 49, 00:23:36.598 "qid": 0, 00:23:36.598 "state": "enabled", 00:23:36.598 "thread": "nvmf_tgt_poll_group_000", 00:23:36.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:36.598 "listen_address": { 00:23:36.598 "trtype": "TCP", 00:23:36.598 "adrfam": "IPv4", 
00:23:36.598 "traddr": "10.0.0.2", 00:23:36.598 "trsvcid": "4420" 00:23:36.598 }, 00:23:36.598 "peer_address": { 00:23:36.598 "trtype": "TCP", 00:23:36.598 "adrfam": "IPv4", 00:23:36.598 "traddr": "10.0.0.1", 00:23:36.598 "trsvcid": "37594" 00:23:36.598 }, 00:23:36.598 "auth": { 00:23:36.598 "state": "completed", 00:23:36.598 "digest": "sha384", 00:23:36.598 "dhgroup": "null" 00:23:36.598 } 00:23:36.598 } 00:23:36.598 ]' 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.598 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.857 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:36.857 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.857 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.857 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.857 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.426 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:23:37.426 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:39.331 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.900 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.472 00:23:40.472 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:40.472 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:40.472 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.050 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.050 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.051 { 00:23:41.051 "cntlid": 51, 00:23:41.051 "qid": 0, 00:23:41.051 "state": "enabled", 
00:23:41.051 "thread": "nvmf_tgt_poll_group_000", 00:23:41.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:41.051 "listen_address": { 00:23:41.051 "trtype": "TCP", 00:23:41.051 "adrfam": "IPv4", 00:23:41.051 "traddr": "10.0.0.2", 00:23:41.051 "trsvcid": "4420" 00:23:41.051 }, 00:23:41.051 "peer_address": { 00:23:41.051 "trtype": "TCP", 00:23:41.051 "adrfam": "IPv4", 00:23:41.051 "traddr": "10.0.0.1", 00:23:41.051 "trsvcid": "37620" 00:23:41.051 }, 00:23:41.051 "auth": { 00:23:41.051 "state": "completed", 00:23:41.051 "digest": "sha384", 00:23:41.051 "dhgroup": "null" 00:23:41.051 } 00:23:41.051 } 00:23:41.051 ]' 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.051 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.619 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:23:41.619 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.156 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.093 00:23:45.093 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:45.093 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.093 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.661 10:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:45.661 { 00:23:45.661 "cntlid": 53, 00:23:45.661 "qid": 0, 00:23:45.661 "state": "enabled", 00:23:45.661 "thread": "nvmf_tgt_poll_group_000", 00:23:45.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:45.661 "listen_address": { 00:23:45.661 "trtype": "TCP", 00:23:45.661 "adrfam": "IPv4", 00:23:45.661 "traddr": "10.0.0.2", 00:23:45.661 "trsvcid": "4420" 00:23:45.661 }, 00:23:45.661 "peer_address": { 00:23:45.661 "trtype": "TCP", 00:23:45.661 "adrfam": "IPv4", 00:23:45.661 "traddr": "10.0.0.1", 00:23:45.661 "trsvcid": "37658" 00:23:45.661 }, 00:23:45.661 "auth": { 00:23:45.661 "state": "completed", 00:23:45.661 "digest": "sha384", 00:23:45.661 "dhgroup": "null" 00:23:45.661 } 00:23:45.661 } 00:23:45.661 ]' 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.661 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.228 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:23:46.228 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:48.133 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:48.392 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:48.960 00:23:48.960 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:48.960 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:48.960 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.529 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.529 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.529 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.529 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:49.529 { 00:23:49.529 "cntlid": 55, 00:23:49.529 "qid": 0, 00:23:49.529 "state": "enabled", 00:23:49.529 "thread": "nvmf_tgt_poll_group_000", 00:23:49.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:49.529 "listen_address": { 00:23:49.529 "trtype": "TCP", 00:23:49.529 "adrfam": "IPv4", 00:23:49.529 "traddr": "10.0.0.2", 00:23:49.529 "trsvcid": "4420" 00:23:49.529 }, 00:23:49.529 "peer_address": { 00:23:49.529 "trtype": "TCP", 00:23:49.529 "adrfam": "IPv4", 00:23:49.529 "traddr": "10.0.0.1", 00:23:49.529 "trsvcid": "56702" 00:23:49.529 }, 00:23:49.529 "auth": { 00:23:49.529 "state": "completed", 00:23:49.529 "digest": "sha384", 00:23:49.529 "dhgroup": "null" 00:23:49.529 } 00:23:49.529 } 00:23:49.529 ]' 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.529 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.099 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:23:50.099 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.008 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:52.008 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.576 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.144 00:23:53.144 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:53.144 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:53.144 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:53.712 { 00:23:53.712 "cntlid": 57, 00:23:53.712 "qid": 0, 00:23:53.712 "state": "enabled", 00:23:53.712 "thread": "nvmf_tgt_poll_group_000", 00:23:53.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:53.712 "listen_address": { 00:23:53.712 "trtype": "TCP", 00:23:53.712 "adrfam": "IPv4", 00:23:53.712 "traddr": "10.0.0.2", 00:23:53.712 "trsvcid": "4420" 00:23:53.712 }, 00:23:53.712 "peer_address": { 00:23:53.712 "trtype": "TCP", 00:23:53.712 "adrfam": "IPv4", 00:23:53.712 "traddr": "10.0.0.1", 00:23:53.712 "trsvcid": "56734" 00:23:53.712 }, 00:23:53.712 "auth": { 00:23:53.712 "state": "completed", 00:23:53.712 "digest": "sha384", 00:23:53.712 "dhgroup": "ffdhe2048" 00:23:53.712 } 00:23:53.712 } 00:23:53.712 ]' 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:53.712 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:53.969 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:53.969 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:53.969 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.969 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.969 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.228 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:23:54.228 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:23:56.138 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.397 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:56.397 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.397 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.397 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.397 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:56.397 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:56.397 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.657 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.658 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.276 00:23:57.560 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:57.560 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:57.560 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:57.819 { 00:23:57.819 "cntlid": 59, 00:23:57.819 "qid": 0, 00:23:57.819 "state": "enabled", 00:23:57.819 "thread": "nvmf_tgt_poll_group_000", 00:23:57.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:23:57.819 "listen_address": { 00:23:57.819 "trtype": "TCP", 00:23:57.819 "adrfam": "IPv4", 00:23:57.819 "traddr": "10.0.0.2", 00:23:57.819 "trsvcid": "4420" 00:23:57.819 }, 00:23:57.819 "peer_address": { 00:23:57.819 "trtype": "TCP", 00:23:57.819 "adrfam": "IPv4", 00:23:57.819 "traddr": "10.0.0.1", 00:23:57.819 "trsvcid": "40950" 00:23:57.819 }, 00:23:57.819 "auth": { 00:23:57.819 "state": "completed", 00:23:57.819 "digest": "sha384", 00:23:57.819 "dhgroup": "ffdhe2048" 00:23:57.819 } 00:23:57.819 } 00:23:57.819 ]' 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.819 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.385 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:23:58.385 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:00.295 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.870 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.438 00:24:01.438 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:01.438 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:01.438 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.697 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:01.698 { 00:24:01.698 "cntlid": 61, 00:24:01.698 "qid": 0, 00:24:01.698 "state": "enabled", 00:24:01.698 "thread": "nvmf_tgt_poll_group_000", 00:24:01.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:01.698 "listen_address": { 00:24:01.698 "trtype": "TCP", 00:24:01.698 "adrfam": "IPv4", 00:24:01.698 "traddr": "10.0.0.2", 00:24:01.698 "trsvcid": "4420" 00:24:01.698 }, 00:24:01.698 "peer_address": { 00:24:01.698 "trtype": "TCP", 00:24:01.698 "adrfam": "IPv4", 00:24:01.698 "traddr": "10.0.0.1", 00:24:01.698 "trsvcid": "40962" 00:24:01.698 }, 00:24:01.698 "auth": { 00:24:01.698 "state": "completed", 00:24:01.698 "digest": "sha384", 00:24:01.698 "dhgroup": "ffdhe2048" 00:24:01.698 } 00:24:01.698 } 00:24:01.698 ]' 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:01.698 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:01.957 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:01.957 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:01.957 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.957 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.958 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.523 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:24:02.523 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.432 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:04.998 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.257 00:24:05.257 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:05.257 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:24:05.257 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:06.190 { 00:24:06.190 "cntlid": 63, 00:24:06.190 "qid": 0, 00:24:06.190 "state": "enabled", 00:24:06.190 "thread": "nvmf_tgt_poll_group_000", 00:24:06.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:06.190 "listen_address": { 00:24:06.190 "trtype": "TCP", 00:24:06.190 "adrfam": "IPv4", 00:24:06.190 "traddr": "10.0.0.2", 00:24:06.190 "trsvcid": "4420" 00:24:06.190 }, 00:24:06.190 "peer_address": { 00:24:06.190 "trtype": "TCP", 00:24:06.190 "adrfam": "IPv4", 00:24:06.190 "traddr": "10.0.0.1", 00:24:06.190 "trsvcid": "40978" 00:24:06.190 }, 00:24:06.190 "auth": { 00:24:06.190 "state": "completed", 00:24:06.190 "digest": "sha384", 00:24:06.190 "dhgroup": "ffdhe2048" 00:24:06.190 } 00:24:06.190 } 00:24:06.190 ]' 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.190 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.447 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:24:06.447 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:24:08.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:08.350 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.610 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.550 
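Annotation: each attach is followed by the same verification seen in the qpairs dumps above: nvmf_subsystem_get_qpairs is captured and jq must report the forced digest and dhgroup with auth state "completed" before the controller is detached. The trace has now advanced the outer dhgroup loop from null through ffdhe2048 to ffdhe3072. A sketch of that structure, reusing the rpc/hostrpc helpers and NQN variables from the sketch earlier; the dhgroups and keys arrays are assumed to be initialized earlier in the script and are not shown in this log:

  # Outer sweep driving this portion of the trace (arrays assumed set up earlier).
  for dhgroup in "${dhgroups[@]}"; do          # null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do           # 0..3, i.e. key0..key3
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done

  # Verification performed inside each pass: the qpair must have authenticated
  # with exactly the forced parameters.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
  hostrpc bdev_nvme_detach_controller nvme0    # tear down before the next combination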
00:24:09.550 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:09.550 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:09.550 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:10.121 { 00:24:10.121 "cntlid": 65, 00:24:10.121 "qid": 0, 00:24:10.121 "state": "enabled", 00:24:10.121 "thread": "nvmf_tgt_poll_group_000", 00:24:10.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:10.121 "listen_address": { 00:24:10.121 "trtype": "TCP", 00:24:10.121 "adrfam": "IPv4", 00:24:10.121 "traddr": "10.0.0.2", 00:24:10.121 "trsvcid": "4420" 00:24:10.121 }, 00:24:10.121 "peer_address": { 00:24:10.121 "trtype": "TCP", 00:24:10.121 "adrfam": "IPv4", 00:24:10.121 "traddr": "10.0.0.1", 00:24:10.121 "trsvcid": "41590" 00:24:10.121 }, 00:24:10.121 "auth": { 00:24:10.121 "state": "completed", 00:24:10.121 "digest": "sha384", 00:24:10.121 "dhgroup": "ffdhe3072" 00:24:10.121 } 00:24:10.121 } 00:24:10.121 ]' 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:10.121 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:10.382 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:10.382 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.382 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.382 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.956 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:24:10.956 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:24:12.335 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.594 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:12.594 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.594 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.594 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.594 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:12.594 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.594 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.163 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:14.100
00:24:14.100 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:14.100 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:14.100 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:14.359 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:14.359 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:14.359 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:14.359 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:14.359 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:14.360 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:14.360 {
00:24:14.360 "cntlid": 67,
00:24:14.360 "qid": 0,
00:24:14.360 "state": "enabled",
00:24:14.360 "thread": "nvmf_tgt_poll_group_000",
00:24:14.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:14.360 "listen_address": {
00:24:14.360 "trtype": "TCP",
00:24:14.360 "adrfam": "IPv4",
00:24:14.360 "traddr": "10.0.0.2",
00:24:14.360 "trsvcid": "4420"
00:24:14.360 },
00:24:14.360 "peer_address": {
00:24:14.360 "trtype": "TCP",
00:24:14.360 "adrfam": "IPv4",
00:24:14.360 "traddr": "10.0.0.1",
00:24:14.360 "trsvcid": "41612"
00:24:14.360 },
00:24:14.360 "auth": {
00:24:14.360 "state": "completed",
00:24:14.360 "digest": "sha384",
00:24:14.360 "dhgroup": "ffdhe3072"
00:24:14.360 }
00:24:14.360 }
00:24:14.360 ]'
00:24:14.360 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:14.360 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:14.360 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:14.360 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:24:14.360 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:14.617 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:14.617 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:14.617 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:14.876 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:24:14.876 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:16.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:16.784 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.351 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:17.352 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.352 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:17.352 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:17.352 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:17.922
00:24:17.922 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:17.922 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:17.922 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:18.491 {
00:24:18.491 "cntlid": 69,
00:24:18.491 "qid": 0,
00:24:18.491 "state": "enabled",
00:24:18.491 "thread": "nvmf_tgt_poll_group_000",
00:24:18.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:18.491 "listen_address": {
00:24:18.491 "trtype": "TCP",
00:24:18.491 "adrfam": "IPv4",
00:24:18.491 "traddr": "10.0.0.2",
00:24:18.491 "trsvcid": "4420"
00:24:18.491 },
00:24:18.491 "peer_address": {
00:24:18.491 "trtype": "TCP",
00:24:18.491 "adrfam": "IPv4",
00:24:18.491 "traddr": "10.0.0.1",
00:24:18.491 "trsvcid": "48704"
00:24:18.491 },
00:24:18.491 "auth": {
00:24:18.491 "state": "completed",
00:24:18.491 "digest": "sha384",
00:24:18.491 "dhgroup": "ffdhe3072"
00:24:18.491 }
00:24:18.491 }
00:24:18.491 ]'
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:18.491 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:18.491 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:24:18.491 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:18.491 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:18.491 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:18.491 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:18.750 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:24:18.750 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:20.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:20.658 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
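The ffdhe3072 pass above repeats one fixed verification pattern per DH-HMAC-CHAP key. Reconstructed from the file@line markers visible in the xtrace output (target/auth.sh@119-123 for the loops, @65-78 for connect_authenticate), the driving logic is roughly the sketch below. This is an approximation inferred from the trace, not the verbatim target/auth.sh; hostrpc, rpc_cmd and bdev_connect are the test's wrappers around scripts/rpc.py, and subnqn/hostnqn stand for the two NQNs used throughout this run.

# Sketch only -- reconstructed from the trace markers, not copied from auth.sh.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Expands to nothing when no controller key exists for this key id,
    # which is why the key3 steps above carry no --dhchap-ctrlr-key argument.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    bdev_connect -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
    # Confirm the qpair negotiated exactly what was configured, then tear down.
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
    hostrpc bdev_nvme_detach_controller nvme0
}

for dhgroup in "${dhgroups[@]}"; do                                     # auth.sh@119
    for keyid in "${!keys[@]}"; do                                      # auth.sh@120
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
            --dhchap-dhgroups "$dhgroup"                                # auth.sh@121
        connect_authenticate sha384 "$dhgroup" "$keyid"                 # auth.sh@123
    done
done

Each iteration additionally round-trips the same key pair through the kernel host stack (nvme connect / nvme disconnect, auth.sh@36 and @82) before deregistering the host, which is the pattern the remaining trace keeps repeating for ffdhe4096 and ffdhe6144.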
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:21.226 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:21.794
00:24:21.794 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:21.794 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:21.794 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:22.360 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.360 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:22.360 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.360 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:22.360 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.360 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:22.360 {
00:24:22.360 "cntlid": 71,
00:24:22.360 "qid": 0,
00:24:22.360 "state": "enabled",
00:24:22.360 "thread": "nvmf_tgt_poll_group_000",
00:24:22.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:22.360 "listen_address": {
00:24:22.361 "trtype": "TCP",
00:24:22.361 "adrfam": "IPv4",
00:24:22.361 "traddr": "10.0.0.2",
00:24:22.361 "trsvcid": "4420"
00:24:22.361 },
00:24:22.361 "peer_address": {
00:24:22.361 "trtype": "TCP",
00:24:22.361 "adrfam": "IPv4",
00:24:22.361 "traddr": "10.0.0.1",
00:24:22.361 "trsvcid": "48730"
00:24:22.361 },
00:24:22.361 "auth": {
00:24:22.361 "state": "completed",
00:24:22.361 "digest": "sha384",
00:24:22.361 "dhgroup": "ffdhe3072"
00:24:22.361 }
00:24:22.361 }
00:24:22.361 ]'
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:22.361 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:22.930 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:24:22.930 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:24.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:24.836 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
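The run has now switched the host options to ffdhe4096 and registered key0/ckey0 for the host NQN. The round trip the script performs next can be reproduced by hand with the two RPCs it wraps. Note the trace uses two RPC sockets: -s /var/tmp/host.sock for the host-side bdev_nvme calls, and the default target socket for the nvmf_* calls. The jq filter below simply condenses the three separate .digest/.dhgroup/.state probes from the trace into one call; paths and addresses are as in this run.

# Attach one authenticated controller, then read back the negotiated auth
# parameters; expected output for this iteration: "completed sha384 ffdhe4096".
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'

As an aside on the secrets passed to nvme connect in this trace: the DHHC-1:00/01/02/03 prefixes encode how the key material is transformed in the NVMe DH-HMAC-CHAP secret representation (00 = unhashed, 01/02/03 = SHA-256/384/512), which is why each key id in this test carries a different prefix.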
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:25.400 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:25.970
00:24:25.970 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:25.970 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:25.970 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:26.542 {
00:24:26.542 "cntlid": 73,
00:24:26.542 "qid": 0,
00:24:26.542 "state": "enabled",
00:24:26.542 "thread": "nvmf_tgt_poll_group_000",
00:24:26.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:26.542 "listen_address": {
00:24:26.542 "trtype": "TCP",
00:24:26.542 "adrfam": "IPv4",
00:24:26.542 "traddr": "10.0.0.2",
00:24:26.542 "trsvcid": "4420"
00:24:26.542 },
00:24:26.542 "peer_address": {
00:24:26.542 "trtype": "TCP",
00:24:26.542 "adrfam": "IPv4",
00:24:26.542 "traddr": "10.0.0.1",
00:24:26.542 "trsvcid": "54328"
00:24:26.542 },
00:24:26.542 "auth": {
00:24:26.542 "state": "completed",
00:24:26.542 "digest": "sha384",
00:24:26.542 "dhgroup": "ffdhe4096"
00:24:26.542 }
00:24:26.542 }
00:24:26.542 ]'
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:26.542 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:27.480 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:24:27.480 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:29.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:29.391 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:29.650 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:30.589
00:24:30.589 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:30.589 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:30.589 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:31.160 {
00:24:31.160 "cntlid": 75,
00:24:31.160 "qid": 0,
00:24:31.160 "state": "enabled",
00:24:31.160 "thread": "nvmf_tgt_poll_group_000",
00:24:31.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:31.160 "listen_address": {
00:24:31.160 "trtype": "TCP",
00:24:31.160 "adrfam": "IPv4",
00:24:31.160 "traddr": "10.0.0.2",
00:24:31.160 "trsvcid": "4420"
00:24:31.160 },
00:24:31.160 "peer_address": {
00:24:31.160 "trtype": "TCP",
00:24:31.160 "adrfam": "IPv4",
00:24:31.160 "traddr": "10.0.0.1",
00:24:31.160 "trsvcid": "54364"
00:24:31.160 },
00:24:31.160 "auth": {
00:24:31.160 "state": "completed",
00:24:31.160 "digest": "sha384",
00:24:31.160 "dhgroup": "ffdhe4096"
00:24:31.160 }
00:24:31.160 }
00:24:31.160 ]'
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:31.160 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:31.756 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:24:31.756 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:24:33.659 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:33.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:33.659 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:33.659 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.659 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:33.660 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.660 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:33.660 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:33.660 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:34.276 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:34.535
00:24:34.796 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:34.796 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:34.796 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:35.055 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:35.055 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:35.055 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:35.055 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:35.313 {
00:24:35.313 "cntlid": 77,
00:24:35.313 "qid": 0,
00:24:35.313 "state": "enabled",
00:24:35.313 "thread": "nvmf_tgt_poll_group_000",
00:24:35.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:35.313 "listen_address": {
00:24:35.313 "trtype": "TCP",
00:24:35.313 "adrfam": "IPv4",
00:24:35.313 "traddr": "10.0.0.2",
00:24:35.313 "trsvcid": "4420"
00:24:35.313 },
00:24:35.313 "peer_address": {
00:24:35.313 "trtype": "TCP",
00:24:35.313 "adrfam": "IPv4",
00:24:35.313 "traddr": "10.0.0.1",
00:24:35.313 "trsvcid": "54388"
00:24:35.313 },
00:24:35.313 "auth": {
00:24:35.313 "state": "completed",
00:24:35.313 "digest": "sha384",
00:24:35.313 "dhgroup": "ffdhe4096"
00:24:35.313 }
00:24:35.313 }
00:24:35.313 ]'
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:35.313 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:35.879 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:24:35.879 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:37.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:37.807 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:38.066 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:38.634
00:24:38.634 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:38.634 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:38.635 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:39.201 {
00:24:39.201 "cntlid": 79,
00:24:39.201 "qid": 0,
00:24:39.201 "state": "enabled",
00:24:39.201 "thread": "nvmf_tgt_poll_group_000",
00:24:39.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:39.201 "listen_address": {
00:24:39.201 "trtype": "TCP",
00:24:39.201 "adrfam": "IPv4",
00:24:39.201 "traddr": "10.0.0.2",
00:24:39.201 "trsvcid": "4420"
00:24:39.201 },
00:24:39.201 "peer_address": {
00:24:39.201 "trtype": "TCP",
00:24:39.201 "adrfam": "IPv4",
00:24:39.201 "traddr": "10.0.0.1",
00:24:39.201 "trsvcid": "53490"
00:24:39.201 },
00:24:39.201 "auth": {
00:24:39.201 "state": "completed",
00:24:39.201 "digest": "sha384",
00:24:39.201 "dhgroup": "ffdhe4096"
00:24:39.201 }
00:24:39.201 }
00:24:39.201 ]'
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:39.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:39.202 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:39.202 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:24:39.202 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:39.460 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:39.460 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:39.460 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:39.720 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:24:39.720 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:24:41.624 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:41.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:41.625 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:42.561 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:43.496
00:24:43.496 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:43.496 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:43.496 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:43.754 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:43.754 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:43.754 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.754 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:43.754 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.754 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:43.754 {
00:24:43.754 "cntlid": 81,
00:24:43.754 "qid": 0,
00:24:43.754 "state": "enabled",
00:24:43.754 "thread": "nvmf_tgt_poll_group_000",
00:24:43.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:43.754 "listen_address": {
00:24:43.754 "trtype": "TCP",
00:24:43.754 "adrfam": "IPv4",
00:24:43.754 "traddr": "10.0.0.2",
00:24:43.754 "trsvcid": "4420"
00:24:43.754 },
00:24:43.754 "peer_address": {
00:24:43.754 "trtype": "TCP",
00:24:43.754 "adrfam": "IPv4",
00:24:43.754 "traddr": "10.0.0.1",
00:24:43.754 "trsvcid": "53528"
00:24:43.754 },
00:24:43.754 "auth": {
00:24:43.754 "state": "completed",
00:24:43.754 "digest": "sha384",
00:24:43.754 "dhgroup": "ffdhe6144"
00:24:43.754 }
00:24:43.754 }
00:24:43.754 ]'
00:24:43.754 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:44.012 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:44.012 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:44.012 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:24:44.012 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:44.012 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:44.012 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:44.012 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:44.270 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:24:44.270 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:45.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:45.645 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:46.212 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:47.148
00:24:47.148 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:47.148 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:47.148 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:47.406 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:47.406 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:47.406 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.406 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:47.406 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.406 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:47.406 {
00:24:47.406 "cntlid": 83,
00:24:47.406 "qid": 0,
00:24:47.406 "state": "enabled",
00:24:47.406 "thread": "nvmf_tgt_poll_group_000",
00:24:47.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:47.406 "listen_address": {
00:24:47.406 "trtype": "TCP",
00:24:47.406 "adrfam": "IPv4",
00:24:47.406 "traddr": "10.0.0.2",
00:24:47.407 "trsvcid": "4420"
00:24:47.407 },
00:24:47.407 "peer_address": {
00:24:47.407 "trtype": "TCP",
00:24:47.407 "adrfam": "IPv4",
00:24:47.407 "traddr": "10.0.0.1",
00:24:47.407 "trsvcid": "54392"
00:24:47.407 },
00:24:47.407 "auth": {
00:24:47.407 "state": "completed",
00:24:47.407 "digest": "sha384",
00:24:47.407 "dhgroup": "ffdhe6144"
00:24:47.407 }
00:24:47.407 }
00:24:47.407 ]'
00:24:47.407 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:47.407 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:47.407 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:47.407 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:24:47.407 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:47.666 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:47.666 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:47.666 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:48.235 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:24:48.235 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:24:49.173 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:49.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:49.433 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:49.433 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:49.433 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:49.433 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:49.433 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:49.433 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:49.433 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:50.004 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:50.945
00:24:50.945 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:50.945 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:50.945 10:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:51.513 {
00:24:51.513 "cntlid": 85,
00:24:51.513 "qid": 0,
00:24:51.513 "state": "enabled",
00:24:51.513 "thread": "nvmf_tgt_poll_group_000",
00:24:51.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:24:51.513 "listen_address": {
00:24:51.513 "trtype": "TCP",
00:24:51.513 "adrfam": "IPv4",
00:24:51.513 "traddr": "10.0.0.2",
00:24:51.513 "trsvcid": "4420"
00:24:51.513 },
00:24:51.513 "peer_address": {
00:24:51.513 "trtype": "TCP",
00:24:51.513 "adrfam": "IPv4",
00:24:51.513 "traddr": "10.0.0.1",
00:24:51.513 "trsvcid": "54408"
00:24:51.513 },
00:24:51.513 "auth": {
00:24:51.513 "state": "completed",
00:24:51.513 "digest": "sha384",
00:24:51.513 "dhgroup": "ffdhe6144"
00:24:51.513 }
00:24:51.513 }
00:24:51.513 ]'
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:24:51.513 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:24:51.771 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:24:51.771 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:24:51.771 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:24:51.771 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:24:51.771 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:24:52.028 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:24:52.028 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:24:53.409 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:24:53.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:24:53.680 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:53.680 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.680 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:53.680 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.680 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:24:53.680 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:53.680 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:53.938 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:24:55.315
00:24:55.315 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:24:55.315 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:24:55.315 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:24:55.574 {
00:24:55.574 "cntlid": 87,
00:24:55.574 "qid": 0, 00:24:55.574 "state": "enabled", 00:24:55.574 "thread": "nvmf_tgt_poll_group_000", 00:24:55.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:24:55.574 "listen_address": { 00:24:55.574 "trtype": "TCP", 00:24:55.574 "adrfam": "IPv4", 00:24:55.574 "traddr": "10.0.0.2", 00:24:55.574 "trsvcid": "4420" 00:24:55.574 }, 00:24:55.574 "peer_address": { 00:24:55.574 "trtype": "TCP", 00:24:55.574 "adrfam": "IPv4", 00:24:55.574 "traddr": "10.0.0.1", 00:24:55.574 "trsvcid": "54448" 00:24:55.574 }, 00:24:55.574 "auth": { 00:24:55.574 "state": "completed", 00:24:55.574 "digest": "sha384", 00:24:55.574 "dhgroup": "ffdhe6144" 00:24:55.574 } 00:24:55.574 } 00:24:55.574 ]' 00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:55.574 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:55.833 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:55.833 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:55.833 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:56.401 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:24:56.401 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:57.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:57.777 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:58.346 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:24:58.346 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:58.346 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:24:58.346 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:58.346 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:58.346 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:58.347 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.347 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.347 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.347 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.347 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.347 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.347 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.885 00:25:00.885 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:00.885 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:00.885 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:00.885 { 00:25:00.885 "cntlid": 89, 00:25:00.885 "qid": 0, 00:25:00.885 "state": "enabled", 00:25:00.885 "thread": "nvmf_tgt_poll_group_000", 00:25:00.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:00.885 "listen_address": { 00:25:00.885 "trtype": "TCP", 00:25:00.885 "adrfam": "IPv4", 00:25:00.885 "traddr": "10.0.0.2", 00:25:00.885 "trsvcid": "4420" 00:25:00.885 }, 00:25:00.885 "peer_address": { 00:25:00.885 "trtype": "TCP", 00:25:00.885 "adrfam": "IPv4", 00:25:00.885 "traddr": "10.0.0.1", 00:25:00.885 "trsvcid": "44048" 00:25:00.885 }, 00:25:00.885 "auth": { 00:25:00.885 "state": "completed", 00:25:00.885 "digest": "sha384", 00:25:00.885 "dhgroup": "ffdhe8192" 00:25:00.885 } 00:25:00.885 } 00:25:00.885 ]' 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:00.885 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:00.886 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:00.886 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:00.886 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:01.143 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:01.143 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:01.143 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:01.402 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:01.402 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:03.306 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:03.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:03.306 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:03.306 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.306 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.306 10:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.306 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:03.306 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.306 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.872 10:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.781 00:25:05.781 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:05.781 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:05.781 10:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:06.123 { 00:25:06.123 "cntlid": 91, 00:25:06.123 "qid": 0, 00:25:06.123 "state": "enabled", 00:25:06.123 "thread": "nvmf_tgt_poll_group_000", 00:25:06.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:06.123 "listen_address": { 00:25:06.123 "trtype": "TCP", 00:25:06.123 "adrfam": "IPv4", 00:25:06.123 "traddr": "10.0.0.2", 00:25:06.123 "trsvcid": "4420" 00:25:06.123 }, 00:25:06.123 "peer_address": { 00:25:06.123 "trtype": "TCP", 00:25:06.123 "adrfam": "IPv4", 00:25:06.123 "traddr": "10.0.0.1", 00:25:06.123 "trsvcid": "44074" 00:25:06.123 }, 00:25:06.123 "auth": { 00:25:06.123 "state": "completed", 00:25:06.123 "digest": "sha384", 00:25:06.123 "dhgroup": "ffdhe8192" 00:25:06.123 } 00:25:06.123 } 00:25:06.123 ]' 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:06.123 10:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:07.075 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:07.075 10:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:08.976 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:08.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:08.976 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:08.976 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.976 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.976 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.976 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:08.976 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:08.976 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.235 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.151 00:25:11.151 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:11.151 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:11.151 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:11.411 10:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:11.411 { 00:25:11.411 "cntlid": 93, 00:25:11.411 "qid": 0, 00:25:11.411 "state": "enabled", 00:25:11.411 "thread": "nvmf_tgt_poll_group_000", 00:25:11.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:11.411 "listen_address": { 00:25:11.411 "trtype": "TCP", 00:25:11.411 "adrfam": "IPv4", 00:25:11.411 "traddr": "10.0.0.2", 00:25:11.411 "trsvcid": "4420" 00:25:11.411 }, 00:25:11.411 "peer_address": { 00:25:11.411 "trtype": "TCP", 00:25:11.411 "adrfam": "IPv4", 00:25:11.411 "traddr": "10.0.0.1", 00:25:11.411 "trsvcid": "46250" 00:25:11.411 }, 00:25:11.411 "auth": { 00:25:11.411 "state": "completed", 00:25:11.411 "digest": "sha384", 00:25:11.411 "dhgroup": "ffdhe8192" 00:25:11.411 } 00:25:11.411 } 00:25:11.411 ]' 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:11.411 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:11.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:11.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:11.411 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:11.979 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:25:11.979 10:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:25:14.510 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:14.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:14.510 10:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:14.510 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.510 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.510 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.510 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:14.510 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:14.510 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:14.770 10:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:16.679 00:25:16.936 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:16.936 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:16.936 
10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:17.195 { 00:25:17.195 "cntlid": 95, 00:25:17.195 "qid": 0, 00:25:17.195 "state": "enabled", 00:25:17.195 "thread": "nvmf_tgt_poll_group_000", 00:25:17.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:17.195 "listen_address": { 00:25:17.195 "trtype": "TCP", 00:25:17.195 "adrfam": "IPv4", 00:25:17.195 "traddr": "10.0.0.2", 00:25:17.195 "trsvcid": "4420" 00:25:17.195 }, 00:25:17.195 "peer_address": { 00:25:17.195 "trtype": "TCP", 00:25:17.195 "adrfam": "IPv4", 00:25:17.195 "traddr": "10.0.0.1", 00:25:17.195 "trsvcid": "44590" 00:25:17.195 }, 00:25:17.195 "auth": { 00:25:17.195 "state": "completed", 00:25:17.195 "digest": "sha384", 00:25:17.195 "dhgroup": "ffdhe8192" 00:25:17.195 } 00:25:17.195 } 00:25:17.195 ]' 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:17.195 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:17.454 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:17.454 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:17.454 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:17.454 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:17.454 10:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:17.713 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:25:17.713 10:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:20.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:20.252 10:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:20.252 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:20.253 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.253 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.253 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.253 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.253 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.253 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.253 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.190 00:25:21.190 
10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:21.190 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:21.190 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:21.449 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.449 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:21.449 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.449 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.449 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.449 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:21.449 { 00:25:21.449 "cntlid": 97, 00:25:21.449 "qid": 0, 00:25:21.449 "state": "enabled", 00:25:21.449 "thread": "nvmf_tgt_poll_group_000", 00:25:21.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:21.449 "listen_address": { 00:25:21.449 "trtype": "TCP", 00:25:21.449 "adrfam": "IPv4", 00:25:21.449 "traddr": "10.0.0.2", 00:25:21.449 "trsvcid": "4420" 00:25:21.449 }, 00:25:21.449 "peer_address": { 00:25:21.449 "trtype": "TCP", 00:25:21.449 "adrfam": "IPv4", 00:25:21.449 "traddr": "10.0.0.1", 00:25:21.449 "trsvcid": "44624" 00:25:21.449 }, 00:25:21.449 "auth": { 00:25:21.449 "state": "completed", 00:25:21.449 "digest": "sha512", 00:25:21.449 "dhgroup": "null" 00:25:21.449 } 00:25:21.449 } 00:25:21.449 ]' 00:25:21.449 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:21.709 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:21.709 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:21.709 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:21.709 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:21.709 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:21.709 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:21.709 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:22.323 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:22.323 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:24.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:24.229 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:24.797 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:25:24.797 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:24.797 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:24.797 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.798 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.057 00:25:25.315 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:25.315 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.315 10:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:25.573 { 00:25:25.573 "cntlid": 99, 00:25:25.573 "qid": 0, 00:25:25.573 "state": "enabled", 00:25:25.573 "thread": "nvmf_tgt_poll_group_000", 00:25:25.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:25.573 "listen_address": { 00:25:25.573 "trtype": "TCP", 00:25:25.573 "adrfam": "IPv4", 00:25:25.573 "traddr": "10.0.0.2", 00:25:25.573 "trsvcid": "4420" 00:25:25.573 }, 00:25:25.573 "peer_address": { 00:25:25.573 "trtype": "TCP", 00:25:25.573 "adrfam": "IPv4", 00:25:25.573 "traddr": "10.0.0.1", 00:25:25.573 "trsvcid": "44650" 00:25:25.573 }, 00:25:25.573 "auth": { 00:25:25.573 "state": "completed", 00:25:25.573 "digest": "sha512", 00:25:25.573 "dhgroup": "null" 00:25:25.573 } 00:25:25.573 } 00:25:25.573 ]' 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:25.573 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:25.830 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:25.830 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:25.830 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:25.830 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:25.830 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:26.089 10:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:26.089 10:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:27.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:27.994 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:25:28.252 10:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.512 00:25:28.771 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:28.771 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:28.771 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:29.030 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.030 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:29.031 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.031 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.031 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.031 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:29.031 { 00:25:29.031 "cntlid": 101, 00:25:29.031 "qid": 0, 00:25:29.031 "state": "enabled", 00:25:29.031 "thread": "nvmf_tgt_poll_group_000", 00:25:29.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:29.031 "listen_address": { 00:25:29.031 "trtype": "TCP", 00:25:29.031 "adrfam": "IPv4", 00:25:29.031 "traddr": "10.0.0.2", 00:25:29.031 "trsvcid": "4420" 00:25:29.031 }, 00:25:29.031 "peer_address": { 00:25:29.031 "trtype": "TCP", 00:25:29.031 "adrfam": "IPv4", 00:25:29.031 "traddr": "10.0.0.1", 00:25:29.031 "trsvcid": "58674" 00:25:29.031 }, 00:25:29.031 "auth": { 00:25:29.031 "state": "completed", 00:25:29.031 "digest": "sha512", 00:25:29.031 "dhgroup": "null" 00:25:29.031 } 00:25:29.031 } 00:25:29.031 ]' 00:25:29.031 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:29.289 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:29.289 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:29.289 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:29.289 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:29.289 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:29.289 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:29.289 10:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:29.858 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:25:29.858 10:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:25:31.762 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:31.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:31.762 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:31.762 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.762 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.020 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.020 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:32.020 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:32.020 10:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:32.586 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:33.151 00:25:33.151 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:33.151 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:33.151 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:33.407 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.407 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:33.407 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.407 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.407 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.407 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:33.407 { 00:25:33.407 "cntlid": 103, 00:25:33.407 "qid": 0, 00:25:33.407 "state": "enabled", 00:25:33.407 "thread": "nvmf_tgt_poll_group_000", 00:25:33.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:33.407 "listen_address": { 00:25:33.407 "trtype": "TCP", 00:25:33.407 "adrfam": "IPv4", 00:25:33.407 "traddr": "10.0.0.2", 00:25:33.407 "trsvcid": "4420" 00:25:33.407 }, 00:25:33.408 "peer_address": { 00:25:33.408 "trtype": "TCP", 00:25:33.408 "adrfam": "IPv4", 00:25:33.408 "traddr": "10.0.0.1", 00:25:33.408 "trsvcid": "58696" 00:25:33.408 }, 00:25:33.408 "auth": { 00:25:33.408 "state": "completed", 00:25:33.408 "digest": "sha512", 00:25:33.408 "dhgroup": "null" 00:25:33.408 } 00:25:33.408 } 00:25:33.408 ]' 00:25:33.408 10:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:33.408 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:33.408 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:33.665 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:25:33.665 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:33.665 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:33.665 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:33.665 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:33.922 10:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:25:33.922 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:25:35.823 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:35.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
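Once the controller attaches, the test does not just trust the return code: it asks the target for the subsystem's queue pairs and checks the auth block of each one. A condensed sketch of that verification, reusing the variables from the earlier sketch (ffdhe2048 stands in for whichever DH group the current pass configured):

  # The host must see the controller it created...
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # ...and the target must report the qpair as authenticated with the expected parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # Detach before repeating the handshake through the kernel initiator.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0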
00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.823 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.392 00:25:36.392 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:36.392 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:36.393 10:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:36.960 { 00:25:36.960 "cntlid": 105, 00:25:36.960 "qid": 0, 00:25:36.960 "state": "enabled", 00:25:36.960 "thread": "nvmf_tgt_poll_group_000", 00:25:36.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:36.960 "listen_address": { 00:25:36.960 "trtype": "TCP", 00:25:36.960 "adrfam": "IPv4", 00:25:36.960 "traddr": "10.0.0.2", 00:25:36.960 "trsvcid": "4420" 00:25:36.960 }, 00:25:36.960 "peer_address": { 00:25:36.960 "trtype": "TCP", 00:25:36.960 "adrfam": "IPv4", 00:25:36.960 "traddr": "10.0.0.1", 00:25:36.960 "trsvcid": "52106" 00:25:36.960 }, 00:25:36.960 "auth": { 00:25:36.960 "state": "completed", 00:25:36.960 "digest": "sha512", 00:25:36.960 "dhgroup": "ffdhe2048" 00:25:36.960 } 00:25:36.960 } 00:25:36.960 ]' 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:36.960 10:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:36.960 10:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:37.528 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:37.528 10:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:38.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.907 10:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:39.845 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:25:39.845 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.846 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.104 00:25:40.104 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:40.104 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:40.104 10:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:40.362 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.362 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:40.362 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.362 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:40.619 { 00:25:40.619 "cntlid": 107, 00:25:40.619 "qid": 0, 00:25:40.619 "state": "enabled", 00:25:40.619 "thread": "nvmf_tgt_poll_group_000", 00:25:40.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:40.619 "listen_address": { 00:25:40.619 "trtype": "TCP", 00:25:40.619 "adrfam": "IPv4", 00:25:40.619 "traddr": "10.0.0.2", 00:25:40.619 "trsvcid": "4420" 00:25:40.619 }, 00:25:40.619 "peer_address": { 00:25:40.619 "trtype": "TCP", 00:25:40.619 "adrfam": "IPv4", 00:25:40.619 "traddr": "10.0.0.1", 00:25:40.619 "trsvcid": "52136" 00:25:40.619 }, 00:25:40.619 "auth": { 00:25:40.619 "state": "completed", 00:25:40.619 "digest": "sha512", 00:25:40.619 "dhgroup": "ffdhe2048" 00:25:40.619 } 00:25:40.619 } 00:25:40.619 ]' 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:40.619 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:41.617 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:41.617 10:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:43.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:43.527 10:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
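The same key is then pushed through the kernel initiator. The --dhchap-secret values use the DHHC-1 container format from the NVMe specification, DHHC-1:XX:<base64>:, where XX is 00 for a cleartext secret or 01/02/03 for a secret transformed with SHA-256/SHA-384/SHA-512, and the base64 payload carries the key material plus a CRC-32. One connect/disconnect round looks roughly like the sketch below, again reusing the earlier variables; the placeholder secrets are deliberate, since the real strings appear verbatim in the log:

  host_uuid=cd6acfbe-4794-e311-a299-001e67a97b02
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$host_uuid" -l 0 \
      --dhchap-secret 'DHHC-1:01:<host secret>' --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'
  nvme disconnect -n "$subnqn"
  # Remove the host entry so the next (digest, dhgroup, key) pass starts from a clean subsystem.
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"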
00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.527 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.095 00:25:44.095 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:44.095 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:44.095 10:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:44.662 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.662 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:44.662 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.662 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:44.921 { 00:25:44.921 "cntlid": 109, 00:25:44.921 "qid": 0, 00:25:44.921 "state": "enabled", 00:25:44.921 "thread": "nvmf_tgt_poll_group_000", 00:25:44.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:44.921 "listen_address": { 00:25:44.921 "trtype": "TCP", 00:25:44.921 "adrfam": "IPv4", 00:25:44.921 "traddr": "10.0.0.2", 00:25:44.921 "trsvcid": "4420" 00:25:44.921 }, 00:25:44.921 "peer_address": { 00:25:44.921 "trtype": "TCP", 00:25:44.921 "adrfam": "IPv4", 00:25:44.921 "traddr": "10.0.0.1", 00:25:44.921 "trsvcid": "52156" 00:25:44.921 }, 00:25:44.921 "auth": { 00:25:44.921 "state": "completed", 00:25:44.921 "digest": "sha512", 00:25:44.921 "dhgroup": "ffdhe2048" 00:25:44.921 } 00:25:44.921 } 00:25:44.921 ]' 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:44.921 10:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:44.921 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:45.181 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:25:45.181 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:47.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:47.728 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:47.728 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:25:47.728 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:47.728 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:47.728 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:25:47.728 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:25:47.728 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:47.729 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:25:47.729 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.729 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.729 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.729 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:25:47.729 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:47.729 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:25:48.299 00:25:48.299 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:48.299 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:48.299 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:48.869 { 00:25:48.869 "cntlid": 111, 00:25:48.869 "qid": 0, 00:25:48.869 "state": "enabled", 00:25:48.869 "thread": "nvmf_tgt_poll_group_000", 00:25:48.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:48.869 "listen_address": { 00:25:48.869 "trtype": "TCP", 00:25:48.869 "adrfam": "IPv4", 00:25:48.869 "traddr": "10.0.0.2", 00:25:48.869 "trsvcid": "4420" 00:25:48.869 }, 00:25:48.869 "peer_address": { 00:25:48.869 "trtype": "TCP", 00:25:48.869 "adrfam": "IPv4", 00:25:48.869 "traddr": "10.0.0.1", 00:25:48.869 "trsvcid": "55404" 00:25:48.869 }, 00:25:48.869 "auth": { 00:25:48.869 "state": "completed", 00:25:48.869 "digest": "sha512", 00:25:48.869 "dhgroup": "ffdhe2048" 00:25:48.869 } 00:25:48.869 } 00:25:48.869 ]' 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:48.869 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:48.869 
10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:49.129 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:49.130 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:49.130 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:49.130 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:49.130 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:50.068 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:25:50.068 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:25:51.446 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:51.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:51.446 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:51.446 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.446 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.703 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.703 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.704 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:51.704 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.704 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.961 10:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.897 00:25:52.897 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:52.897 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:52.897 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:53.157 { 00:25:53.157 "cntlid": 113, 00:25:53.157 "qid": 0, 00:25:53.157 "state": "enabled", 00:25:53.157 "thread": "nvmf_tgt_poll_group_000", 00:25:53.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:53.157 "listen_address": { 00:25:53.157 "trtype": "TCP", 00:25:53.157 "adrfam": "IPv4", 00:25:53.157 "traddr": "10.0.0.2", 00:25:53.157 "trsvcid": "4420" 00:25:53.157 }, 00:25:53.157 "peer_address": { 00:25:53.157 "trtype": "TCP", 00:25:53.157 "adrfam": "IPv4", 00:25:53.157 "traddr": "10.0.0.1", 00:25:53.157 "trsvcid": "55432" 00:25:53.157 }, 00:25:53.157 "auth": { 00:25:53.157 "state": "completed", 00:25:53.157 "digest": "sha512", 00:25:53.157 "dhgroup": "ffdhe3072" 00:25:53.157 } 00:25:53.157 } 00:25:53.157 ]' 00:25:53.157 10:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:53.157 10:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:53.724 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:53.725 10:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:55.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.628 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.198 10:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.458 00:25:56.458 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:25:56.458 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:25:56.458 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:25:57.028 { 00:25:57.028 "cntlid": 115, 00:25:57.028 "qid": 0, 00:25:57.028 "state": "enabled", 00:25:57.028 "thread": "nvmf_tgt_poll_group_000", 00:25:57.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:25:57.028 "listen_address": { 00:25:57.028 "trtype": "TCP", 00:25:57.028 "adrfam": "IPv4", 00:25:57.028 "traddr": "10.0.0.2", 00:25:57.028 "trsvcid": "4420" 00:25:57.028 }, 00:25:57.028 "peer_address": { 00:25:57.028 "trtype": "TCP", 00:25:57.028 "adrfam": "IPv4", 
00:25:57.028 "traddr": "10.0.0.1", 00:25:57.028 "trsvcid": "48950" 00:25:57.028 }, 00:25:57.028 "auth": { 00:25:57.028 "state": "completed", 00:25:57.028 "digest": "sha512", 00:25:57.028 "dhgroup": "ffdhe3072" 00:25:57.028 } 00:25:57.028 } 00:25:57.028 ]' 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:57.028 10:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:57.968 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:57.968 10:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:59.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:59.877 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:00.448 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.449 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.449 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.449 10:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.016 00:26:01.016 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:01.016 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:01.016 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:01.952 { 00:26:01.952 "cntlid": 117, 00:26:01.952 "qid": 0, 00:26:01.952 "state": "enabled", 00:26:01.952 "thread": "nvmf_tgt_poll_group_000", 00:26:01.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:01.952 "listen_address": { 00:26:01.952 "trtype": "TCP", 
00:26:01.952 "adrfam": "IPv4", 00:26:01.952 "traddr": "10.0.0.2", 00:26:01.952 "trsvcid": "4420" 00:26:01.952 }, 00:26:01.952 "peer_address": { 00:26:01.952 "trtype": "TCP", 00:26:01.952 "adrfam": "IPv4", 00:26:01.952 "traddr": "10.0.0.1", 00:26:01.952 "trsvcid": "48992" 00:26:01.952 }, 00:26:01.952 "auth": { 00:26:01.952 "state": "completed", 00:26:01.952 "digest": "sha512", 00:26:01.952 "dhgroup": "ffdhe3072" 00:26:01.952 } 00:26:01.952 } 00:26:01.952 ]' 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:01.952 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:02.517 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:26:02.517 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:04.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:04.420 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:04.680 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:05.248 00:26:05.248 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:05.248 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:05.248 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:05.819 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.819 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:05.819 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.819 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:05.819 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.819 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:05.819 { 00:26:05.819 "cntlid": 119, 00:26:05.819 "qid": 0, 00:26:05.819 "state": "enabled", 00:26:05.819 "thread": "nvmf_tgt_poll_group_000", 00:26:05.819 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:05.819 "listen_address": { 00:26:05.819 "trtype": "TCP", 00:26:05.819 "adrfam": "IPv4", 00:26:05.819 "traddr": "10.0.0.2", 00:26:05.819 "trsvcid": "4420" 00:26:05.819 }, 00:26:05.819 "peer_address": { 00:26:05.819 "trtype": "TCP", 00:26:05.819 "adrfam": "IPv4", 00:26:05.819 "traddr": "10.0.0.1", 00:26:05.819 "trsvcid": "49030" 00:26:05.819 }, 00:26:05.819 "auth": { 00:26:05.819 "state": "completed", 00:26:05.819 "digest": "sha512", 00:26:05.819 "dhgroup": "ffdhe3072" 00:26:05.819 } 00:26:05.819 } 00:26:05.819 ]' 00:26:05.819 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:05.820 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:05.820 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:05.820 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:05.820 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:06.079 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:06.080 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:06.080 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:06.651 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:26:06.651 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:08.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:08.558 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:08.558 10:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.128 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.697 00:26:09.697 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:09.697 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:09.697 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.264 10:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:10.264 { 00:26:10.264 "cntlid": 121, 00:26:10.264 "qid": 0, 00:26:10.264 "state": "enabled", 00:26:10.264 "thread": "nvmf_tgt_poll_group_000", 00:26:10.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:10.264 "listen_address": { 00:26:10.264 "trtype": "TCP", 00:26:10.264 "adrfam": "IPv4", 00:26:10.264 "traddr": "10.0.0.2", 00:26:10.264 "trsvcid": "4420" 00:26:10.264 }, 00:26:10.264 "peer_address": { 00:26:10.264 "trtype": "TCP", 00:26:10.264 "adrfam": "IPv4", 00:26:10.264 "traddr": "10.0.0.1", 00:26:10.264 "trsvcid": "47460" 00:26:10.264 }, 00:26:10.264 "auth": { 00:26:10.264 "state": "completed", 00:26:10.264 "digest": "sha512", 00:26:10.264 "dhgroup": "ffdhe4096" 00:26:10.264 } 00:26:10.264 } 00:26:10.264 ]' 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:10.264 10:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:11.202 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:26:11.202 10:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:13.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
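Each round in this trace follows the same pattern: the host-side bdev_nvme options are pinned to a single digest/dhgroup pair, the host NQN is registered on the subsystem with the key under test, a controller is attached and the qpair's negotiated auth parameters are verified, the kernel initiator path is exercised with nvme connect/disconnect, and the host entry is removed before the next combination. Below is a minimal sketch of one such round, reconstructed from the RPCs and addresses visible in this log; the DHHC secret variables are placeholders, and target-side calls are shown against the default RPC socket while host-side calls use /var/tmp/host.sock, as target/auth.sh does via its hostrpc helper.

# One DH-HMAC-CHAP verification round, as exercised by target/auth.sh.
# Assumes an SPDK target listening on 10.0.0.2:4420, a host RPC server on
# /var/tmp/host.sock, and keyring entries key0/ckey0 already loaded.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02"
subnqn="nqn.2024-03.io.spdk:cnode0"

# Restrict the host to one digest/dhgroup so the negotiation is deterministic.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Allow this host on the subsystem with the key (and controller key) under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller through the host RPC, then confirm the target sees an
# authenticated qpair with the expected digest and dhgroup.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator; DHCHAP_KEY0/DHCHAP_CKEY0
# stand in for the DHHC-1 secrets that appear verbatim in the log above.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 \
    --dhchap-secret "$DHCHAP_KEY0" --dhchap-ctrl-secret "$DHCHAP_CKEY0"
nvme disconnect -n "$subnqn"

# Drop the host entry so the next digest/dhgroup/key combination starts clean.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"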
00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.120 10:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.689 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.690 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.690 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.690 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.690 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.690 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.259 00:26:14.259 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:14.259 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:14.259 10:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:15.196 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:15.197 { 00:26:15.197 "cntlid": 123, 00:26:15.197 "qid": 0, 00:26:15.197 "state": "enabled", 00:26:15.197 "thread": "nvmf_tgt_poll_group_000", 00:26:15.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:15.197 "listen_address": { 00:26:15.197 "trtype": "TCP", 00:26:15.197 "adrfam": "IPv4", 00:26:15.197 "traddr": "10.0.0.2", 00:26:15.197 "trsvcid": "4420" 00:26:15.197 }, 00:26:15.197 "peer_address": { 00:26:15.197 "trtype": "TCP", 00:26:15.197 "adrfam": "IPv4", 00:26:15.197 "traddr": "10.0.0.1", 00:26:15.197 "trsvcid": "47494" 00:26:15.197 }, 00:26:15.197 "auth": { 00:26:15.197 "state": "completed", 00:26:15.197 "digest": "sha512", 00:26:15.197 "dhgroup": "ffdhe4096" 00:26:15.197 } 00:26:15.197 } 00:26:15.197 ]' 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:15.197 10:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:15.457 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:26:15.457 10:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:26:17.470 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:17.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:17.470 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:17.470 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.470 10:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:17.728 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:17.728 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.295 10:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.861 00:26:18.861 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:18.861 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:18.861 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:19.120 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.120 10:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:19.120 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.120 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.120 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.120 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:19.120 { 00:26:19.120 "cntlid": 125, 00:26:19.120 "qid": 0, 00:26:19.120 "state": "enabled", 00:26:19.120 "thread": "nvmf_tgt_poll_group_000", 00:26:19.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:19.120 "listen_address": { 00:26:19.120 "trtype": "TCP", 00:26:19.120 "adrfam": "IPv4", 00:26:19.120 "traddr": "10.0.0.2", 00:26:19.120 "trsvcid": "4420" 00:26:19.120 }, 00:26:19.120 "peer_address": { 00:26:19.120 "trtype": "TCP", 00:26:19.120 "adrfam": "IPv4", 00:26:19.120 "traddr": "10.0.0.1", 00:26:19.120 "trsvcid": "37002" 00:26:19.120 }, 00:26:19.120 "auth": { 00:26:19.120 "state": "completed", 00:26:19.120 "digest": "sha512", 00:26:19.120 "dhgroup": "ffdhe4096" 00:26:19.120 } 00:26:19.120 } 00:26:19.120 ]' 00:26:19.120 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:19.380 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:19.381 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:19.381 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:19.381 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:19.381 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:19.381 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:19.381 10:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:19.950 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:26:19.950 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd: 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:21.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:21.856 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:22.116 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:26:23.057 00:26:23.057 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:23.057 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:23.057 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:23.316 10:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.316 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:23.316 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.316 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.575 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.575 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:23.575 { 00:26:23.575 "cntlid": 127, 00:26:23.575 "qid": 0, 00:26:23.575 "state": "enabled", 00:26:23.575 "thread": "nvmf_tgt_poll_group_000", 00:26:23.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:23.575 "listen_address": { 00:26:23.575 "trtype": "TCP", 00:26:23.575 "adrfam": "IPv4", 00:26:23.575 "traddr": "10.0.0.2", 00:26:23.575 "trsvcid": "4420" 00:26:23.575 }, 00:26:23.575 "peer_address": { 00:26:23.575 "trtype": "TCP", 00:26:23.575 "adrfam": "IPv4", 00:26:23.575 "traddr": "10.0.0.1", 00:26:23.575 "trsvcid": "37028" 00:26:23.575 }, 00:26:23.575 "auth": { 00:26:23.575 "state": "completed", 00:26:23.575 "digest": "sha512", 00:26:23.575 "dhgroup": "ffdhe4096" 00:26:23.575 } 00:26:23.575 } 00:26:23.575 ]' 00:26:23.575 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:23.575 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:23.575 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:23.575 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:23.575 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:23.575 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:23.575 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:23.575 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:24.144 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:26:24.144 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:26.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.053 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.988 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.925 00:26:27.925 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:27.925 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:27.925 
10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:28.492 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.492 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:28.492 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.492 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:28.492 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.492 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:28.492 { 00:26:28.492 "cntlid": 129, 00:26:28.492 "qid": 0, 00:26:28.492 "state": "enabled", 00:26:28.492 "thread": "nvmf_tgt_poll_group_000", 00:26:28.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:28.492 "listen_address": { 00:26:28.492 "trtype": "TCP", 00:26:28.492 "adrfam": "IPv4", 00:26:28.492 "traddr": "10.0.0.2", 00:26:28.492 "trsvcid": "4420" 00:26:28.492 }, 00:26:28.492 "peer_address": { 00:26:28.492 "trtype": "TCP", 00:26:28.492 "adrfam": "IPv4", 00:26:28.492 "traddr": "10.0.0.1", 00:26:28.492 "trsvcid": "44336" 00:26:28.492 }, 00:26:28.492 "auth": { 00:26:28.492 "state": "completed", 00:26:28.492 "digest": "sha512", 00:26:28.492 "dhgroup": "ffdhe6144" 00:26:28.492 } 00:26:28.492 } 00:26:28.492 ]' 00:26:28.492 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:28.492 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:28.492 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:28.492 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:28.492 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:28.751 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:28.751 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:28.751 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:29.319 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:26:29.319 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret 
DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=: 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:31.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.228 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.168 10:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.105 00:26:33.105 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:33.105 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:33.105 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:33.674 { 00:26:33.674 "cntlid": 131, 00:26:33.674 "qid": 0, 00:26:33.674 "state": "enabled", 00:26:33.674 "thread": "nvmf_tgt_poll_group_000", 00:26:33.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:33.674 "listen_address": { 00:26:33.674 "trtype": "TCP", 00:26:33.674 "adrfam": "IPv4", 00:26:33.674 "traddr": "10.0.0.2", 00:26:33.674 "trsvcid": "4420" 00:26:33.674 }, 00:26:33.674 "peer_address": { 00:26:33.674 "trtype": "TCP", 00:26:33.674 "adrfam": "IPv4", 00:26:33.674 "traddr": "10.0.0.1", 00:26:33.674 "trsvcid": "44364" 00:26:33.674 }, 00:26:33.674 "auth": { 00:26:33.674 "state": "completed", 00:26:33.674 "digest": "sha512", 00:26:33.674 "dhgroup": "ffdhe6144" 00:26:33.674 } 00:26:33.674 } 00:26:33.674 ]' 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:33.674 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:33.932 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:33.932 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:33.932 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:34.191 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:26:34.191 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==: 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:36.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:36.095 10:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.031 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.032 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.032 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.032 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.032 10:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.968 00:26:37.968 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:26:37.968 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:37.968 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:26:38.535 10:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:26:38.535 { 00:26:38.535 "cntlid": 133, 00:26:38.535 "qid": 0, 00:26:38.535 "state": "enabled", 00:26:38.535 "thread": "nvmf_tgt_poll_group_000", 00:26:38.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:26:38.535 "listen_address": { 00:26:38.535 "trtype": "TCP", 00:26:38.535 "adrfam": "IPv4", 00:26:38.535 "traddr": "10.0.0.2", 00:26:38.535 "trsvcid": "4420" 00:26:38.535 }, 00:26:38.535 "peer_address": { 00:26:38.535 "trtype": "TCP", 00:26:38.535 "adrfam": "IPv4", 00:26:38.535 "traddr": "10.0.0.1", 00:26:38.535 "trsvcid": "38358" 00:26:38.535 }, 00:26:38.535 "auth": { 00:26:38.535 "state": "completed", 00:26:38.535 "digest": "sha512", 00:26:38.535 "dhgroup": "ffdhe6144" 00:26:38.535 } 00:26:38.535 } 00:26:38.535 ]' 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:38.535 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:26:38.791 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:38.791 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:38.791 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:39.050 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret 
DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:26:39.050 10:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:26:40.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:40.955 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:26:41.519 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:26:42.457
00:26:42.457 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:26:42.457 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:26:42.457 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:26:43.027 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:43.027 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:26:43.027 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.027 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:26:43.286 {
00:26:43.286 "cntlid": 135,
00:26:43.286 "qid": 0,
00:26:43.286 "state": "enabled",
00:26:43.286 "thread": "nvmf_tgt_poll_group_000",
00:26:43.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:26:43.286 "listen_address": {
00:26:43.286 "trtype": "TCP",
00:26:43.286 "adrfam": "IPv4",
00:26:43.286 "traddr": "10.0.0.2",
00:26:43.286 "trsvcid": "4420"
00:26:43.286 },
00:26:43.286 "peer_address": {
00:26:43.286 "trtype": "TCP",
00:26:43.286 "adrfam": "IPv4",
00:26:43.286 "traddr": "10.0.0.1",
00:26:43.286 "trsvcid": "38392"
00:26:43.286 },
00:26:43.286 "auth": {
00:26:43.286 "state": "completed",
00:26:43.286 "digest": "sha512",
00:26:43.286 "dhgroup": "ffdhe6144"
00:26:43.286 }
00:26:43.286 }
00:26:43.286 ]'
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:26:43.286 10:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:26:43.852 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:26:43.852 10:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:26:45.759 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:26:45.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:26:45.759 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:26:45.759 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.759 10:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:45.759 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.759 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:26:45.759 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:26:45.759 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:45.759 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:46.019 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:47.925
00:26:47.925 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:26:47.925 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:26:47.925 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:26:48.493 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:48.493 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:26:48.493 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.493 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:48.493 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.493 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:26:48.493 {
00:26:48.493 "cntlid": 137,
00:26:48.493 "qid": 0,
00:26:48.493 "state": "enabled",
00:26:48.493 "thread": "nvmf_tgt_poll_group_000",
00:26:48.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:26:48.493 "listen_address": {
00:26:48.493 "trtype": "TCP",
00:26:48.493 "adrfam": "IPv4",
00:26:48.493 "traddr": "10.0.0.2",
00:26:48.493 "trsvcid": "4420"
00:26:48.493 },
00:26:48.493 "peer_address": {
00:26:48.493 "trtype": "TCP",
00:26:48.493 "adrfam": "IPv4",
00:26:48.493 "traddr": "10.0.0.1",
00:26:48.493 "trsvcid": "36622"
00:26:48.493 },
00:26:48.493 "auth": {
00:26:48.493 "state": "completed",
00:26:48.493 "digest": "sha512",
00:26:48.493 "dhgroup": "ffdhe8192"
00:26:48.493 }
00:26:48.493 }
00:26:48.493 ]'
00:26:48.493 10:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:26:48.493 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:26:48.493 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:26:48.493 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:26:48.493 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:26:48.752 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:26:48.752 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:26:48.752 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:26:49.319 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:26:49.319 10:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:26:50.719 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:26:50.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:26:50.719 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:26:50.719 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:50.719 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:45.759 10:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:50.719 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:26:50.719 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:50.719 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:51.311 10:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:53.215
00:26:53.215 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:26:53.215 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:26:53.215 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:26:53.476 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:53.476 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:26:53.476 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:53.476 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:53.476 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:53.476 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:26:53.476 {
00:26:53.476 "cntlid": 139,
00:26:53.476 "qid": 0,
00:26:53.476 "state": "enabled",
00:26:53.476 "thread": "nvmf_tgt_poll_group_000",
00:26:53.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:26:53.476 "listen_address": {
00:26:53.476 "trtype": "TCP",
00:26:53.476 "adrfam": "IPv4",
00:26:53.476 "traddr": "10.0.0.2",
00:26:53.476 "trsvcid": "4420"
00:26:53.476 },
00:26:53.476 "peer_address": {
00:26:53.476 "trtype": "TCP",
00:26:53.476 "adrfam": "IPv4",
00:26:53.476 "traddr": "10.0.0.1",
00:26:53.476 "trsvcid": "36638"
00:26:53.476 },
00:26:53.476 "auth": {
00:26:53.476 "state": "completed",
00:26:53.476 "digest": "sha512",
00:26:53.476 "dhgroup": "ffdhe8192"
00:26:53.476 }
00:26:53.476 }
00:26:53.476 ]'
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:26:53.735 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:26:54.305 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:26:54.305 10:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: --dhchap-ctrl-secret DHHC-1:02:MGMzNzcxNjIwNjNmZWFmNjAxZTg1OTRhZDVkODNlMmZiOGQ0ZTgzYzRlNzI4ZWNhULtW8g==:
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:26:56.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:56.211 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:58.744
00:26:58.744 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:26:58.744 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:26:58.744 10:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:26:58.744 {
00:26:58.744 "cntlid": 141,
00:26:58.744 "qid": 0,
00:26:58.744 "state": "enabled",
00:26:58.744 "thread": "nvmf_tgt_poll_group_000",
00:26:58.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:26:58.744 "listen_address": {
00:26:58.744 "trtype": "TCP",
00:26:58.744 "adrfam": "IPv4",
00:26:58.744 "traddr": "10.0.0.2",
00:26:58.744 "trsvcid": "4420"
00:26:58.744 },
00:26:58.744 "peer_address": {
00:26:58.744 "trtype": "TCP",
00:26:58.744 "adrfam": "IPv4",
00:26:58.744 "traddr": "10.0.0.1",
00:26:58.744 "trsvcid": "58812"
00:26:58.744 },
00:26:58.744 "auth": {
00:26:58.744 "state": "completed",
00:26:58.744 "digest": "sha512",
00:26:58.744 "dhgroup": "ffdhe8192"
00:26:58.744 }
00:26:58.744 }
00:26:58.744 ]'
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:26:58.744 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:26:59.309 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:26:59.309 10:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:01:ZTA5MzM2ZTY1NzFhNDhmNDcwZWY5MmZkOTA2NWQ5NzaVWYPd:
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:27:01.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:01.207 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:27:01.772 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:27:03.673
00:27:03.673 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:27:03.673 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:27:03.673 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:27:03.932 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.932 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:27:03.932 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.932 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:03.932 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.932 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:27:03.932 {
00:27:03.932 "cntlid": 143,
00:27:03.932 "qid": 0,
00:27:03.932 "state": "enabled",
00:27:03.932 "thread": "nvmf_tgt_poll_group_000",
00:27:03.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:27:03.932 "listen_address": {
00:27:03.932 "trtype": "TCP",
00:27:03.932 "adrfam": "IPv4",
00:27:03.932 "traddr": "10.0.0.2",
00:27:03.932 "trsvcid": "4420"
00:27:03.932 },
00:27:03.932 "peer_address": {
00:27:03.932 "trtype": "TCP",
00:27:03.932 "adrfam": "IPv4",
00:27:03.932 "traddr": "10.0.0.1",
00:27:03.932 "trsvcid": "58850"
00:27:03.932 },
00:27:03.932 "auth": {
00:27:03.932 "state": "completed",
00:27:03.932 "digest": "sha512",
00:27:03.932 "dhgroup": "ffdhe8192"
00:27:03.932 }
00:27:03.932 }
00:27:03.932 ]'
00:27:04.190 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:27:04.190 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:27:04.190 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:27:04.191 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:27:04.191 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:27:04.191 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:27:04.191 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:27:04.191 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:27:04.449 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:27:04.449 10:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=:
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:27:06.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:06.987 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:07.245 10:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:09.147
00:27:09.147 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:27:09.147 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:27:09.147 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:27:09.406 {
00:27:09.406 "cntlid": 145,
00:27:09.406 "qid": 0,
00:27:09.406 "state": "enabled",
00:27:09.406 "thread": "nvmf_tgt_poll_group_000",
00:27:09.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:27:09.406 "listen_address": {
00:27:09.406 "trtype": "TCP",
00:27:09.406 "adrfam": "IPv4",
00:27:09.406 "traddr": "10.0.0.2",
00:27:09.406 "trsvcid": "4420"
00:27:09.406 },
00:27:09.406 "peer_address": {
00:27:09.406 "trtype": "TCP",
00:27:09.406 "adrfam": "IPv4",
00:27:09.406 "traddr": "10.0.0.1",
00:27:09.406 "trsvcid": "39754"
00:27:09.406 },
00:27:09.406 "auth": {
00:27:09.406 "state": "completed",
00:27:09.406 "digest": "sha512",
00:27:09.406 "dhgroup": "ffdhe8192"
00:27:09.406 }
00:27:09.406 }
00:27:09.406 ]'
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:27:09.406 10:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:27:09.406 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:27:09.406 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:27:09.406 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:27:09.665 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:27:09.665 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:27:09.665 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:27:10.232 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:27:10.232 10:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:ZWFjOWQxYTBjNjY5ZjJhNGZkZDI2ZGViYTFiOTAwOWQ2MzU1MmM3NDVhMDI4OGQ2FPULcw==: --dhchap-ctrl-secret DHHC-1:03:MmM2OWFhODExZmU1NjkxM2Y0ZjNjYThjZDIyNzU1MGU0OWQ4MjUzOWZlN2ZhYjg4ZWNmZWUzOTQ3N2Y0OTBhZruDWW4=:
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:27:12.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:27:12.135 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:27:14.667 request:
00:27:14.667 {
00:27:14.667 "name": "nvme0",
00:27:14.667 "trtype": "tcp",
00:27:14.667 "traddr": "10.0.0.2",
00:27:14.667 "adrfam": "ipv4",
00:27:14.667 "trsvcid": "4420",
00:27:14.667 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:27:14.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:27:14.667 "prchk_reftag": false,
00:27:14.667 "prchk_guard": false,
00:27:14.667 "hdgst": false,
00:27:14.667 "ddgst": false,
00:27:14.667 "dhchap_key": "key2",
00:27:14.667 "allow_unrecognized_csi": false,
00:27:14.667 "method": "bdev_nvme_attach_controller",
00:27:14.667 "req_id": 1
00:27:14.667 }
00:27:14.667 Got JSON-RPC error response
00:27:14.667 response:
00:27:14.667 {
00:27:14.667 "code": -5,
00:27:14.667 "message": "Input/output error"
00:27:14.668 }
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:14.668 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:16.042 request:
00:27:16.042 {
00:27:16.042 "name": "nvme0",
00:27:16.042 "trtype": "tcp",
00:27:16.042 "traddr": "10.0.0.2",
00:27:16.042 "adrfam": "ipv4",
00:27:16.042 "trsvcid": "4420",
00:27:16.042 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:27:16.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:27:16.043 "prchk_reftag": false,
00:27:16.043 "prchk_guard": false,
00:27:16.043 "hdgst": false,
00:27:16.043 "ddgst": false,
00:27:16.043 "dhchap_key": "key1",
00:27:16.043 "dhchap_ctrlr_key": "ckey2",
00:27:16.043 "allow_unrecognized_csi": false,
00:27:16.043 "method": "bdev_nvme_attach_controller",
00:27:16.043 "req_id": 1
00:27:16.043 }
00:27:16.043 Got JSON-RPC error response
00:27:16.043 response:
00:27:16.043 {
00:27:16.043 "code": -5,
00:27:16.043 "message": "Input/output error"
00:27:16.043 }
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:16.043 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:17.946 request:
00:27:17.946 {
00:27:17.946 "name": "nvme0",
00:27:17.946 "trtype": "tcp",
00:27:17.946 "traddr": "10.0.0.2",
00:27:17.946 "adrfam": "ipv4",
00:27:17.946 "trsvcid": "4420",
00:27:17.946 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:27:17.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:27:17.946 "prchk_reftag": false,
00:27:17.946 "prchk_guard": false,
00:27:17.946 "hdgst": false,
00:27:17.946 "ddgst": false,
00:27:17.946 "dhchap_key": "key1",
00:27:17.946 "dhchap_ctrlr_key": "ckey1",
00:27:17.946 "allow_unrecognized_csi": false,
00:27:17.946 "method": "bdev_nvme_attach_controller",
00:27:17.946 "req_id": 1
00:27:17.946 }
00:27:17.946 Got JSON-RPC error response
00:27:17.946 response:
00:27:17.946 {
00:27:17.946 "code": -5,
00:27:17.946 "message": "Input/output error"
00:27:17.946 }
00:27:17.946 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:27:17.946 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:17.946 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2086963
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2086963 ']'
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2086963
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2086963
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2086963'
00:27:17.947 killing process with pid 2086963
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2086963
00:27:17.947 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2086963
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2125746
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2125746
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2125746 ']'
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:18.204 10:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:19.151 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:19.151 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:27:19.151 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:19.151 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:19.151 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2125746
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2125746 ']'
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:19.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.409 10:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.666 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:19.666 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:27:19.666 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:27:19.666 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.666 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 null0 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IXn 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Hsm ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hsm 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dqB 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.R5S ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R5S 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:27:19.925 10:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.EVe 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZYJ ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZYJ 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uKh 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
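
The sequence above wires up both halves of the DH-HMAC-CHAP exchange: the target registers the secret files as named keyring entries and binds key3 to the host NQN, and the host-side bdev_nvme then attaches with the matching key, negotiating the sha512 digest over the ffdhe8192 group. Condensed into explicit RPC calls, this is a sketch that mirrors the log; it assumes the same key files exist and that the host application keeps its own keyring on /var/tmp/host.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Target side: load the DHHC-1 secret and allow this host to use it.
    "$rpc" keyring_file_add_key key3 /tmp/spdk.key-sha512.uKh
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

    # Host side: the initiator needs the same key in its keyring before attaching.
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uKh
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key3
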
00:27:19.925 10:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:27:23.213 nvme0n1 00:27:23.213 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:27:23.213 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:27:23.213 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:23.472 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.472 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:23.472 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.472 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:23.472 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.472 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:27:23.472 { 00:27:23.472 "cntlid": 1, 00:27:23.472 "qid": 0, 00:27:23.472 "state": "enabled", 00:27:23.472 "thread": "nvmf_tgt_poll_group_000", 00:27:23.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:27:23.472 "listen_address": { 00:27:23.472 "trtype": "TCP", 00:27:23.472 "adrfam": "IPv4", 00:27:23.472 "traddr": "10.0.0.2", 00:27:23.472 "trsvcid": "4420" 00:27:23.472 }, 00:27:23.472 "peer_address": { 00:27:23.472 "trtype": "TCP", 00:27:23.472 "adrfam": "IPv4", 00:27:23.472 "traddr": "10.0.0.1", 00:27:23.473 "trsvcid": "46250" 00:27:23.473 }, 00:27:23.473 "auth": { 00:27:23.473 "state": "completed", 00:27:23.473 "digest": "sha512", 00:27:23.473 "dhgroup": "ffdhe8192" 00:27:23.473 } 00:27:23.473 } 00:27:23.473 ]' 00:27:23.473 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:27:23.473 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:23.473 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:27:23.473 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:23.473 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:27:23.733 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:23.733 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:23.733 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:24.302 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:27:24.302 10:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:26.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:27:26.202 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:27:26.460 10:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:27:26.718 request: 00:27:26.718 { 00:27:26.718 "name": "nvme0", 00:27:26.718 "trtype": "tcp", 00:27:26.718 "traddr": "10.0.0.2", 00:27:26.718 "adrfam": "ipv4", 00:27:26.718 "trsvcid": "4420", 00:27:26.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:27:26.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:27:26.718 "prchk_reftag": false, 00:27:26.718 "prchk_guard": false, 00:27:26.718 "hdgst": false, 00:27:26.718 "ddgst": false, 00:27:26.718 "dhchap_key": "key3", 00:27:26.718 "allow_unrecognized_csi": false, 00:27:26.718 "method": "bdev_nvme_attach_controller", 00:27:26.718 "req_id": 1 00:27:26.718 } 00:27:26.718 Got JSON-RPC error response 00:27:26.718 response: 00:27:26.718 { 00:27:26.718 "code": -5, 00:27:26.718 "message": "Input/output error" 00:27:26.718 } 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:27:26.718 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:27:27.293 10:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:27:27.860 request: 00:27:27.860 { 00:27:27.860 "name": "nvme0", 00:27:27.860 "trtype": "tcp", 00:27:27.860 "traddr": "10.0.0.2", 00:27:27.860 "adrfam": "ipv4", 00:27:27.860 "trsvcid": "4420", 00:27:27.860 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:27:27.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:27:27.860 "prchk_reftag": false, 00:27:27.860 "prchk_guard": false, 00:27:27.860 "hdgst": false, 00:27:27.860 "ddgst": false, 00:27:27.860 "dhchap_key": "key3", 00:27:27.860 "allow_unrecognized_csi": false, 00:27:27.860 "method": "bdev_nvme_attach_controller", 00:27:27.860 "req_id": 1 00:27:27.860 } 00:27:27.860 Got JSON-RPC error response 00:27:27.860 response: 00:27:27.860 { 00:27:27.860 "code": -5, 00:27:27.860 "message": "Input/output error" 00:27:27.860 } 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:27.860 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:28.119 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:28.120 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:28.120 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:29.554 request: 00:27:29.554 { 00:27:29.554 "name": "nvme0", 00:27:29.554 "trtype": "tcp", 00:27:29.554 "traddr": "10.0.0.2", 00:27:29.554 "adrfam": "ipv4", 00:27:29.554 "trsvcid": "4420", 00:27:29.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:27:29.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:27:29.554 "prchk_reftag": false, 00:27:29.554 "prchk_guard": false, 00:27:29.554 "hdgst": false, 00:27:29.554 "ddgst": false, 00:27:29.554 "dhchap_key": "key0", 00:27:29.554 "dhchap_ctrlr_key": "key1", 00:27:29.554 "allow_unrecognized_csi": false, 00:27:29.554 "method": "bdev_nvme_attach_controller", 00:27:29.554 "req_id": 1 00:27:29.554 } 00:27:29.554 Got JSON-RPC error response 00:27:29.554 response: 00:27:29.554 { 00:27:29.554 "code": -5, 00:27:29.554 "message": "Input/output error" 00:27:29.554 } 00:27:29.554 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:27:29.554 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.554 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.554 10:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.554 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:27:29.554 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:27:29.554 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:27:29.813 nvme0n1 00:27:29.813 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:27:29.813 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:27:29.813 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:30.384 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.384 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:30.384 10:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:30.655 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:27:30.655 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.655 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.655 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.655 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:27:30.655 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:27:30.656 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:27:33.954 nvme0n1 00:27:33.954 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:27:33.954 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:27:33.954 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:27:34.524 10:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:35.093 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.093 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:27:35.093 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: --dhchap-ctrl-secret DHHC-1:03:YjE4NWUxOWI0MDk0YmE3YjQ1YmFkNmVlZjk3OWYwMWJkNGFlOTdlMjhjMTg2OTgwMDRlM2M5ZTJhMjZiOWYxNkIczSA=: 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:37.013 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:37.632 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:27:37.633 10:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:27:39.535 request: 00:27:39.535 { 00:27:39.535 "name": "nvme0", 00:27:39.535 "trtype": "tcp", 00:27:39.535 "traddr": "10.0.0.2", 00:27:39.535 "adrfam": "ipv4", 00:27:39.535 "trsvcid": "4420", 00:27:39.535 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:27:39.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:27:39.535 "prchk_reftag": false, 00:27:39.535 "prchk_guard": false, 00:27:39.535 "hdgst": false, 00:27:39.535 "ddgst": false, 00:27:39.535 "dhchap_key": "key1", 00:27:39.535 "allow_unrecognized_csi": false, 00:27:39.535 "method": "bdev_nvme_attach_controller", 00:27:39.535 "req_id": 1 00:27:39.535 } 00:27:39.535 Got JSON-RPC error response 00:27:39.535 response: 00:27:39.535 { 00:27:39.535 "code": -5, 00:27:39.535 "message": "Input/output error" 00:27:39.535 } 00:27:39.535 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:27:39.535 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:39.535 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:39.535 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:39.535 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:39.535 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:39.535 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:42.068 nvme0n1 00:27:42.068 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:27:42.068 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:27:42.068 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:42.328 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.328 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:42.328 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:42.898 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:42.898 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.898 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.898 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.898 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:27:42.898 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:27:42.898 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:27:43.158 nvme0n1 00:27:43.158 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:27:43.158 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:27:43.158 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:43.726 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.726 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:43.726 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key3 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: '' 2s 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: ]] 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTc3MWZlMTU3NDJjZGQ1NmYwMjU4M2FkYTg1NzJjZjVMLspQ: 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:27:44.295 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key2 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: 2s 00:27:46.832 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: ]] 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmVmMThhMTVjNWRlNmVmZmE0OTg4YjI2MjNhYTA1MDEwODcxZDY5N2Q0MjkzZTYyq8j5Ew==: 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:27:46.833 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:27:48.734 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:48.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:48.734 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:48.734 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.734 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.734 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.734 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:48.734 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:48.734 10:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:50.638 nvme0n1 00:27:50.638 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:50.638 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.638 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.638 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.638 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:50.638 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:52.545 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:27:52.545 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:27:52.545 10:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:52.545 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.545 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:52.545 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.545 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:52.804 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.804 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:27:52.804 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:27:53.063 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:27:53.063 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:27:53.063 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:27:53.630 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:27:53.631 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:27:55.532 request: 00:27:55.532 { 00:27:55.532 "name": "nvme0", 00:27:55.532 "dhchap_key": "key1", 00:27:55.532 "dhchap_ctrlr_key": "key3", 00:27:55.532 "method": "bdev_nvme_set_keys", 00:27:55.532 "req_id": 1 00:27:55.532 } 00:27:55.532 Got JSON-RPC error response 00:27:55.532 response: 00:27:55.532 { 00:27:55.532 "code": -13, 00:27:55.532 "message": "Permission denied" 00:27:55.532 } 00:27:55.532 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:27:55.532 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:55.532 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:55.532 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:55.532 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:27:55.532 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:27:55.532 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:56.099 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:27:56.099 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:27:57.033 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:27:57.033 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:27:57.033 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:57.292 10:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:00.581 nvme0n1 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
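
Every failure case in this section, including the bdev_nvme_set_keys call being dissected here, runs under the harness's NOT wrapper: the command is expected to fail (this one with JSON-RPC code -13, Permission denied, because the target now only accepts the rotated key pair), and the wrapper inverts the exit status so an unexpected success fails the test. A simplified reimplementation of that pattern; the real helper in autotest_common.sh also distinguishes exit codes above 128 for signals, as the es bookkeeping in the trace shows:

    # Expected-failure wrapper: succeeds only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # unexpected success
        fi
        return 0       # failure is the passing outcome here
    }

    # Re-keying with a pair the target no longer allows must be rejected:
    NOT "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0
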
00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:28:00.581 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:28:03.111 request: 00:28:03.111 { 00:28:03.111 "name": "nvme0", 00:28:03.111 "dhchap_key": "key2", 00:28:03.111 "dhchap_ctrlr_key": "key0", 00:28:03.111 "method": "bdev_nvme_set_keys", 00:28:03.111 "req_id": 1 00:28:03.111 } 00:28:03.111 Got JSON-RPC error response 00:28:03.111 response: 00:28:03.111 { 00:28:03.111 "code": -13, 00:28:03.111 "message": "Permission denied" 00:28:03.111 } 00:28:03.111 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:28:03.111 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:03.111 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:03.111 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:03.111 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:28:03.111 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:28:03.111 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:03.369 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:28:03.369 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:28:04.305 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:28:04.305 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:28:04.305 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2086998 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2086998 ']' 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2086998 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:28:05.242 
10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2086998 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2086998' 00:28:05.242 killing process with pid 2086998 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2086998 00:28:05.242 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2086998 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.826 rmmod nvme_tcp 00:28:05.826 rmmod nvme_fabrics 00:28:05.826 rmmod nvme_keyring 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2125746 ']' 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2125746 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2125746 ']' 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2125746 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125746 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125746' 00:28:05.826 killing process with pid 2125746 00:28:05.826 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2125746 00:28:05.826 10:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2125746 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.395 10:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.IXn /tmp/spdk.key-sha256.dqB /tmp/spdk.key-sha384.EVe /tmp/spdk.key-sha512.uKh /tmp/spdk.key-sha512.Hsm /tmp/spdk.key-sha384.R5S /tmp/spdk.key-sha256.ZYJ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:28:08.362 00:28:08.362 real 6m28.060s 00:28:08.362 user 15m7.846s 00:28:08.362 sys 0m42.691s 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.362 ************************************ 00:28:08.362 END TEST nvmf_auth_target 00:28:08.362 ************************************ 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:08.362 10:38:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:08.362 ************************************ 00:28:08.362 START TEST nvmf_bdevio_no_huge 00:28:08.362 ************************************ 00:28:08.362 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:28:08.620 * Looking for test storage... 
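[Editor's note] Before nvmf_bdevio_no_huge probes its test storage, the auth-target teardown above kills both daemons, unloads the nvme modules, and scrubs only the SPDK-tagged firewall rules via the iptr helper. A condensed sketch of that cleanup; the first and last two lines are copied from the log, while the netns deletion is an assumption about what _remove_spdk_ns does internally (the log only shows the wrapper call):

    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop SPDK_NVMF-tagged rules only
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # as logged above
    rm -f /tmp/spdk.key-*                                 # randomized key files, e.g. spdk.key-null.IXn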
00:28:08.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:08.620 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:08.620 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:28:08.620 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:28:08.878 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:08.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.879 --rc genhtml_branch_coverage=1 00:28:08.879 --rc genhtml_function_coverage=1 00:28:08.879 --rc genhtml_legend=1 00:28:08.879 --rc geninfo_all_blocks=1 00:28:08.879 --rc geninfo_unexecuted_blocks=1 00:28:08.879 00:28:08.879 ' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:08.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.879 --rc genhtml_branch_coverage=1 00:28:08.879 --rc genhtml_function_coverage=1 00:28:08.879 --rc genhtml_legend=1 00:28:08.879 --rc geninfo_all_blocks=1 00:28:08.879 --rc geninfo_unexecuted_blocks=1 00:28:08.879 00:28:08.879 ' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:08.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.879 --rc genhtml_branch_coverage=1 00:28:08.879 --rc genhtml_function_coverage=1 00:28:08.879 --rc genhtml_legend=1 00:28:08.879 --rc geninfo_all_blocks=1 00:28:08.879 --rc geninfo_unexecuted_blocks=1 00:28:08.879 00:28:08.879 ' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:08.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.879 --rc genhtml_branch_coverage=1 00:28:08.879 --rc genhtml_function_coverage=1 00:28:08.879 --rc genhtml_legend=1 00:28:08.879 --rc geninfo_all_blocks=1 00:28:08.879 --rc geninfo_unexecuted_blocks=1 00:28:08.879 00:28:08.879 ' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:08.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.879 10:38:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.216 
10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:12.216 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:12.216 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:12.217 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:12.217 Found net devices under 0000:84:00.0: cvl_0_0 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:12.217 Found net devices under 0000:84:00.1: cvl_0_1 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:28:12.217 00:28:12.217 --- 10.0.0.2 ping statistics --- 00:28:12.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.217 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:28:12.217 00:28:12.217 --- 10.0.0.1 ping statistics --- 00:28:12.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.217 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2133262 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2133262 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2133262 ']' 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.217 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.217 [2024-12-09 10:38:56.833219] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:28:12.217 [2024-12-09 10:38:56.833326] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:28:12.476 [2024-12-09 10:38:56.929581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.476 [2024-12-09 10:38:57.000644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.476 [2024-12-09 10:38:57.000709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.476 [2024-12-09 10:38:57.000743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.476 [2024-12-09 10:38:57.000771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.476 [2024-12-09 10:38:57.000783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
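[Editor's note] The target launch above, reduced to the single command it amounts to (flags copied from the log):

    # -i 0: shared memory id; -e 0xFFFF: tracepoint group mask (echoed in the
    # app_setup_trace notices above); -m 0x78: core mask, matching the
    # reactors started on cores 3-6 below.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

--no-huge with -s 1024 reserves a 1024 MB plain-memory pool instead of hugepages; the DPDK EAL parameter line above confirms SPDK added --legacy-mem for this mode.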
00:28:12.476 [2024-12-09 10:38:57.002079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:12.476 [2024-12-09 10:38:57.002132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:12.476 [2024-12-09 10:38:57.002187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:12.476 [2024-12-09 10:38:57.002191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.734 [2024-12-09 10:38:57.259509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.734 Malloc0 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:12.734 [2024-12-09 10:38:57.298185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.734 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:12.735 { 00:28:12.735 "params": { 00:28:12.735 "name": "Nvme$subsystem", 00:28:12.735 "trtype": "$TEST_TRANSPORT", 00:28:12.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:12.735 "adrfam": "ipv4", 00:28:12.735 "trsvcid": "$NVMF_PORT", 00:28:12.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:12.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:12.735 "hdgst": ${hdgst:-false}, 00:28:12.735 "ddgst": ${ddgst:-false} 00:28:12.735 }, 00:28:12.735 "method": "bdev_nvme_attach_controller" 00:28:12.735 } 00:28:12.735 EOF 00:28:12.735 )") 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:28:12.735 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:12.735 "params": { 00:28:12.735 "name": "Nvme1", 00:28:12.735 "trtype": "tcp", 00:28:12.735 "traddr": "10.0.0.2", 00:28:12.735 "adrfam": "ipv4", 00:28:12.735 "trsvcid": "4420", 00:28:12.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:12.735 "hdgst": false, 00:28:12.735 "ddgst": false 00:28:12.735 }, 00:28:12.735 "method": "bdev_nvme_attach_controller" 00:28:12.735 }' 00:28:12.735 [2024-12-09 10:38:57.355949] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
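[Editor's note] The rpc_cmd calls above build the whole bdevio target in four steps. As plain rpc.py invocations against the default /var/tmp/spdk.sock they would read (arguments copied from the log; the 64/512 sizes come from MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE in bdevio.sh):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB ramdisk, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                          # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then renders the bdev_nvme_attach_controller parameters printed above into the config that bdevio consumes through --json /dev/fd/62.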
00:28:12.735 [2024-12-09 10:38:57.356158] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2133407 ] 00:28:12.994 [2024-12-09 10:38:57.460590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:12.994 [2024-12-09 10:38:57.528637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.994 [2024-12-09 10:38:57.528684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.994 [2024-12-09 10:38:57.528687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.252 I/O targets: 00:28:13.252 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:28:13.252 00:28:13.252 00:28:13.252 CUnit - A unit testing framework for C - Version 2.1-3 00:28:13.252 http://cunit.sourceforge.net/ 00:28:13.252 00:28:13.252 00:28:13.252 Suite: bdevio tests on: Nvme1n1 00:28:13.252 Test: blockdev write read block ...passed 00:28:13.252 Test: blockdev write zeroes read block ...passed 00:28:13.252 Test: blockdev write zeroes read no split ...passed 00:28:13.252 Test: blockdev write zeroes read split ...passed 00:28:13.252 Test: blockdev write zeroes read split partial ...passed 00:28:13.252 Test: blockdev reset ...[2024-12-09 10:38:57.844574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:13.252 [2024-12-09 10:38:57.844688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132e660 (9): Bad file descriptor 00:28:13.252 [2024-12-09 10:38:57.861885] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:28:13.252 passed 00:28:13.252 Test: blockdev write read 8 blocks ...passed 00:28:13.253 Test: blockdev write read size > 128k ...passed 00:28:13.253 Test: blockdev write read invalid size ...passed 00:28:13.511 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:13.511 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:13.511 Test: blockdev write read max offset ...passed 00:28:13.511 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:13.511 Test: blockdev writev readv 8 blocks ...passed 00:28:13.511 Test: blockdev writev readv 30 x 1block ...passed 00:28:13.511 Test: blockdev writev readv block ...passed 00:28:13.511 Test: blockdev writev readv size > 128k ...passed 00:28:13.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:13.511 Test: blockdev comparev and writev ...[2024-12-09 10:38:58.115347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.115385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.511 [2024-12-09 10:38:58.115411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.115430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:13.511 [2024-12-09 10:38:58.115756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.115782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:13.511 [2024-12-09 10:38:58.115806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.115823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:13.511 [2024-12-09 10:38:58.116143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.116174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:13.511 [2024-12-09 10:38:58.116198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.116215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:13.511 [2024-12-09 10:38:58.116606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.116634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:13.511 [2024-12-09 10:38:58.116657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:28:13.511 [2024-12-09 10:38:58.116673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:13.511 passed 00:28:13.770 Test: blockdev nvme passthru rw ...passed 00:28:13.770 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:38:58.199045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:13.770 [2024-12-09 10:38:58.199078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:13.770 [2024-12-09 10:38:58.199232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:13.770 [2024-12-09 10:38:58.199257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:13.770 [2024-12-09 10:38:58.199406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:13.770 [2024-12-09 10:38:58.199431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:13.770 [2024-12-09 10:38:58.199572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:13.770 [2024-12-09 10:38:58.199596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:13.770 passed 00:28:13.770 Test: blockdev nvme admin passthru ...passed 00:28:13.770 Test: blockdev copy ...passed 00:28:13.770 00:28:13.770 Run Summary: Type Total Ran Passed Failed Inactive 00:28:13.770 suites 1 1 n/a 0 0 00:28:13.770 tests 23 23 23 0 0 00:28:13.770 asserts 152 152 152 0 n/a 00:28:13.770 00:28:13.770 Elapsed time = 1.070 seconds 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.028 rmmod nvme_tcp 00:28:14.028 rmmod nvme_fabrics 00:28:14.028 rmmod nvme_keyring 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2133262 ']' 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2133262 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2133262 ']' 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2133262 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.028 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2133262 00:28:14.286 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:28:14.286 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:28:14.286 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2133262' 00:28:14.286 killing process with pid 2133262 00:28:14.286 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2133262 00:28:14.286 10:38:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2133262 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.547 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.093 00:28:17.093 real 0m8.158s 00:28:17.093 user 0m11.450s 00:28:17.093 sys 0m3.784s 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.093 ************************************ 00:28:17.093 END TEST nvmf_bdevio_no_huge 00:28:17.093 ************************************ 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:17.093 ************************************ 00:28:17.093 START TEST nvmf_tls 00:28:17.093 ************************************ 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:28:17.093 * Looking for test storage... 00:28:17.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.093 --rc genhtml_branch_coverage=1 00:28:17.093 --rc genhtml_function_coverage=1 00:28:17.093 --rc genhtml_legend=1 00:28:17.093 --rc geninfo_all_blocks=1 00:28:17.093 --rc geninfo_unexecuted_blocks=1 00:28:17.093 00:28:17.093 ' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.093 --rc genhtml_branch_coverage=1 00:28:17.093 --rc genhtml_function_coverage=1 00:28:17.093 --rc genhtml_legend=1 00:28:17.093 --rc geninfo_all_blocks=1 00:28:17.093 --rc geninfo_unexecuted_blocks=1 00:28:17.093 00:28:17.093 ' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.093 --rc genhtml_branch_coverage=1 00:28:17.093 --rc genhtml_function_coverage=1 00:28:17.093 --rc genhtml_legend=1 00:28:17.093 --rc geninfo_all_blocks=1 00:28:17.093 --rc geninfo_unexecuted_blocks=1 00:28:17.093 00:28:17.093 ' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.093 --rc genhtml_branch_coverage=1 00:28:17.093 --rc genhtml_function_coverage=1 00:28:17.093 --rc genhtml_legend=1 00:28:17.093 --rc geninfo_all_blocks=1 00:28:17.093 --rc geninfo_unexecuted_blocks=1 00:28:17.093 00:28:17.093 ' 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
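The trace above is scripts/common.sh deciding whether the installed lcov predates 2.0 (the `lt 1.15 2` call). A condensed sketch of the comparison it performs, splitting both versions on '.', '-' and ':' and comparing field by field; the real helper additionally sanitizes each field through `decimal`, which this sketch assumes away:

    cmp_versions() {                # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' || $2 == '<=' ]]; return; }
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' || $2 == '>=' ]]; return; }
        done
        [[ $2 == *'='* ]]           # all fields equal: only <=, >=, == succeed
    }
    cmp_versions 1.15 '<' 2 && echo "old lcov"   # prints: old lcov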
00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:17.093 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.094 10:39:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:20.384 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:20.384 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:20.384 Found net devices under 0000:84:00.0: cvl_0_0 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:20.384 Found net devices under 0000:84:00.1: cvl_0_1 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
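Both e810 ports (device ID 0x159b) pass the classifier above, and each PCI address is then mapped to its kernel net device through sysfs. A minimal sketch of that mapping step, with this run's addresses hard-coded:

    for pci in 0000:84:00.0 0000:84:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue                    # skip ports with no bound netdev
            echo "Found net devices under $pci: ${path##*/}"
        done
    done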
NVMF_SECOND_INITIATOR_IP= 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.384 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:28:20.385 00:28:20.385 --- 10.0.0.2 ping statistics --- 00:28:20.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.385 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:28:20.385 00:28:20.385 --- 10.0.0.1 ping statistics --- 00:28:20.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.385 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2135737 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2135737 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2135737 ']' 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.385 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:20.385 [2024-12-09 10:39:04.745372] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
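The two pings above confirm the test topology: one e810 port (cvl_0_0) is moved into a private network namespace to host the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so traffic crosses the physical wire. The setup traced above condenses to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator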
00:28:20.385 [2024-12-09 10:39:04.745552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.385 [2024-12-09 10:39:04.867744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.385 [2024-12-09 10:39:04.931570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.385 [2024-12-09 10:39:04.931633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.385 [2024-12-09 10:39:04.931650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.385 [2024-12-09 10:39:04.931664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.385 [2024-12-09 10:39:04.931675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.385 [2024-12-09 10:39:04.932354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.642 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.642 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:28:20.642 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.642 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.642 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:20.643 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.643 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:28:20.643 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:28:20.900 true 00:28:20.900 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:20.900 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:28:21.159 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:28:21.159 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:28:21.159 10:39:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:28:22.097 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:22.097 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:28:22.354 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:28:22.354 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:28:22.354 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:28:22.612 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
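nvmfappstart launches the target inside that namespace with --wait-for-rpc, so the app comes up with its framework paused until the ssl socket options below are applied over RPC. Roughly (backgrounding is assumed; the log shows the command and the waitforlisten poll):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # retries RPC until /var/tmp/spdk.sock answers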
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:22.612 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:28:23.545 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:28:23.545 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:28:23.545 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:23.545 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:28:23.803 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:28:23.803 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:28:23.803 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:28:24.371 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:24.371 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:28:24.937 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:28:24.937 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:28:24.937 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:28:25.503 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:28:25.503 10:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
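Each socket option above is exercised as a set-then-read-back round trip against the ssl implementation, with jq pulling the field out of the JSON reply. The pattern, using the rpc.py path from this workspace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc sock_impl_set_options -i ssl --tls-version 13
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]
    $rpc sock_impl_set_options -i ssl --enable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
    $rpc sock_impl_set_options -i ssl --disable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]]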
nvmf/common.sh@730 -- # local prefix key digest 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.gmrKDoyOIe 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.XHrmc1PpF6 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.gmrKDoyOIe 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.XHrmc1PpF6 00:28:26.074 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:28:26.638 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:28:27.207 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.gmrKDoyOIe 00:28:27.207 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gmrKDoyOIe 00:28:27.207 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:27.789 [2024-12-09 10:39:12.389864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.789 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:28:28.355 10:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:28:28.614 [2024-12-09 10:39:13.209233] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:28.614 [2024-12-09 10:39:13.209769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.614 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:28:29.184 malloc0 00:28:29.184 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
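format_interchange_psk, traced above, wraps a configured key in the NVMe TLS PSK interchange format: an NVMeTLSkey-1 prefix, a hash indicator (01 here), and a base64 payload of the key bytes followed by their CRC32 in little-endian, closed with a colon. A sketch equivalent to the inline python the helper pipes through; the payload structure is inferred from the keys printed above, not quoted from the script:

    format_interchange_psk() {   # usage: format_interchange_psk <key> <hash-id>
        python3 - "$1" "$2" <<'EOF'
    import sys, base64, zlib
    key = sys.argv[1].encode()                   # the key material, as ASCII bytes
    crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 of the key, little-endian
    print("NVMeTLSkey-1:0%s:%s:" % (sys.argv[2], base64.b64encode(key + crc).decode()))
    EOF
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: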
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:29.753 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gmrKDoyOIe 00:28:30.341 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:28:30.911 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.gmrKDoyOIe 00:28:43.136 Initializing NVMe Controllers 00:28:43.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:43.136 Initialization complete. Launching workers. 00:28:43.136 ======================================================== 00:28:43.136 Latency(us) 00:28:43.136 Device Information : IOPS MiB/s Average min max 00:28:43.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3565.17 13.93 17964.44 2180.33 29769.02 00:28:43.136 ======================================================== 00:28:43.136 Total : 3565.17 13.93 17964.44 2180.33 29769.02 00:28:43.136 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gmrKDoyOIe 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gmrKDoyOIe 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2138659 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2138659 /var/tmp/bdevperf.sock 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2138659 ']' 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
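Everything the TLS-enabled target needs is provisioned above through rpc.py before spdk_nvme_perf connects with -S ssl and --psk-path. The sequence, condensed ($rpc as defined in the sketch earlier; key file name from this run):

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                 # -k: this listener requires TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.gmrKDoyOIe
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0          # bind the PSK to this host NQN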
00:28:43.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.136 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:43.136 [2024-12-09 10:39:25.788953] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:28:43.136 [2024-12-09 10:39:25.789050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138659 ] 00:28:43.136 [2024-12-09 10:39:25.920593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.136 [2024-12-09 10:39:26.035141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.136 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.136 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:28:43.136 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gmrKDoyOIe 00:28:43.137 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:43.137 [2024-12-09 10:39:26.962224] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:43.137 TLSTESTn1 00:28:43.137 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:28:43.137 Running I/O for 10 seconds... 
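On the initiator side the same key is loaded into bdevperf's own keyring over its private RPC socket, and the controller is attached by key name rather than by file path; perform_tests then drives the verify workload whose progress follows. Condensed from the trace above:

    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gmrKDoyOIe
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests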
00:28:44.650 1445.00 IOPS, 5.64 MiB/s [2024-12-09T09:39:30.250Z] 1494.00 IOPS, 5.84 MiB/s [2024-12-09T09:39:31.630Z] 1493.00 IOPS, 5.83 MiB/s [2024-12-09T09:39:32.571Z] 1526.25 IOPS, 5.96 MiB/s [2024-12-09T09:39:33.505Z] 1530.20 IOPS, 5.98 MiB/s [2024-12-09T09:39:34.443Z] 1579.67 IOPS, 6.17 MiB/s [2024-12-09T09:39:35.486Z] 1740.57 IOPS, 6.80 MiB/s [2024-12-09T09:39:36.424Z] 1800.50 IOPS, 7.03 MiB/s [2024-12-09T09:39:37.375Z] 1857.56 IOPS, 7.26 MiB/s [2024-12-09T09:39:37.375Z] 1921.10 IOPS, 7.50 MiB/s 00:28:52.721 Latency(us) 00:28:52.721 [2024-12-09T09:39:37.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.721 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:52.721 Verification LBA range: start 0x0 length 0x2000 00:28:52.721 TLSTESTn1 : 10.02 1929.05 7.54 0.00 0.00 66223.22 8446.86 69516.71 00:28:52.721 [2024-12-09T09:39:37.375Z] =================================================================================================================== 00:28:52.721 [2024-12-09T09:39:37.375Z] Total : 1929.05 7.54 0.00 0.00 66223.22 8446.86 69516.71 00:28:52.721 { 00:28:52.722 "results": [ 00:28:52.722 { 00:28:52.722 "job": "TLSTESTn1", 00:28:52.722 "core_mask": "0x4", 00:28:52.722 "workload": "verify", 00:28:52.722 "status": "finished", 00:28:52.722 "verify_range": { 00:28:52.722 "start": 0, 00:28:52.722 "length": 8192 00:28:52.722 }, 00:28:52.722 "queue_depth": 128, 00:28:52.722 "io_size": 4096, 00:28:52.722 "runtime": 10.024607, 00:28:52.722 "iops": 1929.0531788428216, 00:28:52.722 "mibps": 7.535363979854772, 00:28:52.722 "io_failed": 0, 00:28:52.722 "io_timeout": 0, 00:28:52.722 "avg_latency_us": 66223.2201312327, 00:28:52.722 "min_latency_us": 8446.862222222222, 00:28:52.722 "max_latency_us": 69516.70518518519 00:28:52.722 } 00:28:52.722 ], 00:28:52.722 "core_count": 1 00:28:52.722 } 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2138659 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2138659 ']' 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2138659 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138659 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138659' 00:28:52.722 killing process with pid 2138659 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2138659 00:28:52.722 Received shutdown signal, test time was about 10.000000 seconds 00:28:52.722 00:28:52.722 Latency(us) 00:28:52.722 [2024-12-09T09:39:37.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.722 [2024-12-09T09:39:37.376Z] 
=================================================================================================================== 00:28:52.722 [2024-12-09T09:39:37.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.722 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2138659 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XHrmc1PpF6 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XHrmc1PpF6 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XHrmc1PpF6 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XHrmc1PpF6 00:28:52.982 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2139979 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2139979 /var/tmp/bdevperf.sock 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2139979 ']' 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:53.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
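The tls.sh@147 case that follows is a negative test: connecting with the second key (/tmp/tmp.XHrmc1PpF6), which the target never bound to host1, must fail, and the NOT wrapper turns that expected failure into a pass. The pattern, simplified (the real helper in autotest_common.sh also inspects the exit status class, as the es= handling above shows):

    NOT() { ! "$@"; }    # simplified: succeed only if the wrapped command fails
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XHrmc1PpF6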
00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.242 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:53.242 [2024-12-09 10:39:37.740315] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:28:53.242 [2024-12-09 10:39:37.740494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139979 ] 00:28:53.242 [2024-12-09 10:39:37.894441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.501 [2024-12-09 10:39:38.000682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.760 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.760 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:28:53.761 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XHrmc1PpF6 00:28:54.020 10:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:54.589 [2024-12-09 10:39:38.973468] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:54.589 [2024-12-09 10:39:38.981942] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:54.589 [2024-12-09 10:39:38.982933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430580 (107): Transport endpoint is not connected 00:28:54.589 [2024-12-09 10:39:38.983921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430580 (9): Bad file descriptor 00:28:54.589 [2024-12-09 10:39:38.984920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:28:54.589 [2024-12-09 10:39:38.984944] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:28:54.589 [2024-12-09 10:39:38.984962] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:28:54.589 [2024-12-09 10:39:38.984994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:28:54.589 request: 00:28:54.589 { 00:28:54.589 "name": "TLSTEST", 00:28:54.589 "trtype": "tcp", 00:28:54.589 "traddr": "10.0.0.2", 00:28:54.589 "adrfam": "ipv4", 00:28:54.589 "trsvcid": "4420", 00:28:54.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.589 "prchk_reftag": false, 00:28:54.589 "prchk_guard": false, 00:28:54.589 "hdgst": false, 00:28:54.589 "ddgst": false, 00:28:54.589 "psk": "key0", 00:28:54.589 "allow_unrecognized_csi": false, 00:28:54.589 "method": "bdev_nvme_attach_controller", 00:28:54.589 "req_id": 1 00:28:54.589 } 00:28:54.589 Got JSON-RPC error response 00:28:54.589 response: 00:28:54.589 { 00:28:54.589 "code": -5, 00:28:54.589 "message": "Input/output error" 00:28:54.589 } 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2139979 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2139979 ']' 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2139979 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139979 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139979' 00:28:54.589 killing process with pid 2139979 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2139979 00:28:54.589 Received shutdown signal, test time was about 10.000000 seconds 00:28:54.589 00:28:54.589 Latency(us) 00:28:54.589 [2024-12-09T09:39:39.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.589 [2024-12-09T09:39:39.243Z] =================================================================================================================== 00:28:54.589 [2024-12-09T09:39:39.243Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:54.589 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2139979 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gmrKDoyOIe 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.gmrKDoyOIe 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gmrKDoyOIe 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gmrKDoyOIe 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2140130 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2140130 /var/tmp/bdevperf.sock 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2140130 ']' 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:54.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.849 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:54.849 [2024-12-09 10:39:39.477537] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:28:54.849 [2024-12-09 10:39:39.477715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140130 ] 00:28:55.109 [2024-12-09 10:39:39.602864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.109 [2024-12-09 10:39:39.709824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.370 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.370 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:28:55.370 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gmrKDoyOIe 00:28:55.940 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:28:56.200 [2024-12-09 10:39:40.755651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:56.200 [2024-12-09 10:39:40.763886] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:28:56.200 [2024-12-09 10:39:40.763921] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:28:56.200 [2024-12-09 10:39:40.763975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:56.200 [2024-12-09 10:39:40.764168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e21580 (107): Transport endpoint is not connected 00:28:56.200 [2024-12-09 10:39:40.765168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e21580 (9): Bad file descriptor 00:28:56.200 [2024-12-09 10:39:40.766163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:28:56.200 [2024-12-09 10:39:40.766239] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:28:56.200 [2024-12-09 10:39:40.766279] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:28:56.200 [2024-12-09 10:39:40.766325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
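This variant fails one step earlier, on the target side: tcp.c (above) cannot find a PSK for the identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", because no key was ever provisioned for that host/subsystem pair. A hedged reconstruction of how that identity string is assembled; the split into version and retained-hash fields is an assumption based on the NVMe/TCP TLS PSK identity convention, not something this log states:

    # Hedged reconstruction of the logged PSK identity string:
    # "NVMe" + TLS PSK version (0) + "R" + hash selector (01, assumed SHA-256)
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    identity="NVMe0R01 ${hostnqn} ${subnqn}"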
00:28:56.200 request: 00:28:56.200 { 00:28:56.200 "name": "TLSTEST", 00:28:56.200 "trtype": "tcp", 00:28:56.200 "traddr": "10.0.0.2", 00:28:56.200 "adrfam": "ipv4", 00:28:56.200 "trsvcid": "4420", 00:28:56.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:56.200 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:56.200 "prchk_reftag": false, 00:28:56.200 "prchk_guard": false, 00:28:56.200 "hdgst": false, 00:28:56.200 "ddgst": false, 00:28:56.200 "psk": "key0", 00:28:56.200 "allow_unrecognized_csi": false, 00:28:56.200 "method": "bdev_nvme_attach_controller", 00:28:56.200 "req_id": 1 00:28:56.200 } 00:28:56.200 Got JSON-RPC error response 00:28:56.200 response: 00:28:56.200 { 00:28:56.200 "code": -5, 00:28:56.200 "message": "Input/output error" 00:28:56.200 } 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2140130 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2140130 ']' 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2140130 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140130 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140130' 00:28:56.200 killing process with pid 2140130 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2140130 00:28:56.200 Received shutdown signal, test time was about 10.000000 seconds 00:28:56.200 00:28:56.200 Latency(us) 00:28:56.200 [2024-12-09T09:39:40.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.200 [2024-12-09T09:39:40.854Z] =================================================================================================================== 00:28:56.200 [2024-12-09T09:39:40.854Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:56.200 10:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2140130 00:28:56.769 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:28:56.769 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:28:56.769 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gmrKDoyOIe 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.gmrKDoyOIe 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gmrKDoyOIe 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gmrKDoyOIe 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2140394 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2140394 /var/tmp/bdevperf.sock 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2140394 ']' 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:56.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.770 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:56.770 [2024-12-09 10:39:41.216155] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:28:56.770 [2024-12-09 10:39:41.216269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140394 ] 00:28:56.770 [2024-12-09 10:39:41.355427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.029 [2024-12-09 10:39:41.474527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.029 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.029 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:28:57.029 10:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gmrKDoyOIe 00:28:57.598 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:57.858 [2024-12-09 10:39:42.396584] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:57.858 [2024-12-09 10:39:42.406907] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:28:57.858 [2024-12-09 10:39:42.406943] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:28:57.858 [2024-12-09 10:39:42.406988] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:57.858 [2024-12-09 10:39:42.407200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270580 (107): Transport endpoint is not connected 00:28:57.858 [2024-12-09 10:39:42.408200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270580 (9): Bad file descriptor 00:28:57.858 [2024-12-09 10:39:42.409195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:28:57.858 [2024-12-09 10:39:42.409248] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:28:57.858 [2024-12-09 10:39:42.409284] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:28:57.858 [2024-12-09 10:39:42.409334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:28:57.858 request: 00:28:57.858 { 00:28:57.858 "name": "TLSTEST", 00:28:57.858 "trtype": "tcp", 00:28:57.858 "traddr": "10.0.0.2", 00:28:57.858 "adrfam": "ipv4", 00:28:57.858 "trsvcid": "4420", 00:28:57.858 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:57.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.858 "prchk_reftag": false, 00:28:57.858 "prchk_guard": false, 00:28:57.858 "hdgst": false, 00:28:57.858 "ddgst": false, 00:28:57.858 "psk": "key0", 00:28:57.858 "allow_unrecognized_csi": false, 00:28:57.858 "method": "bdev_nvme_attach_controller", 00:28:57.858 "req_id": 1 00:28:57.858 } 00:28:57.858 Got JSON-RPC error response 00:28:57.858 response: 00:28:57.858 { 00:28:57.858 "code": -5, 00:28:57.858 "message": "Input/output error" 00:28:57.858 } 00:28:57.858 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2140394 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2140394 ']' 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2140394 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140394 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140394' 00:28:57.859 killing process with pid 2140394 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2140394 00:28:57.859 Received shutdown signal, test time was about 10.000000 seconds 00:28:57.859 00:28:57.859 Latency(us) 00:28:57.859 [2024-12-09T09:39:42.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.859 [2024-12-09T09:39:42.513Z] =================================================================================================================== 00:28:57.859 [2024-12-09T09:39:42.513Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:57.859 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2140394 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:28:58.431 
10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:58.431 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2140532 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2140532 /var/tmp/bdevperf.sock 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2140532 ']' 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:58.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.432 10:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:58.432 [2024-12-09 10:39:42.974706] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
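The case starting here (target/tls.sh@156) registers a key with an empty path. As the log below shows, keyring_file_check_path rejects anything that is not an absolute path (JSON-RPC -1, Operation not permitted), and the subsequent attach then fails with -126 because key0 never reached the keyring. The failing call, sketched with the socket path as logged:

    # key0 is never created; the later attach reports "Required key not available"
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
    # -> "Non-absolute paths are not allowed"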
00:28:58.432 [2024-12-09 10:39:42.974880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140532 ] 00:28:58.692 [2024-12-09 10:39:43.136247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.692 [2024-12-09 10:39:43.249243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.951 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.951 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:28:58.951 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:28:59.210 [2024-12-09 10:39:43.745676] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:28:59.210 [2024-12-09 10:39:43.745734] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:59.210 request: 00:28:59.210 { 00:28:59.210 "name": "key0", 00:28:59.210 "path": "", 00:28:59.210 "method": "keyring_file_add_key", 00:28:59.210 "req_id": 1 00:28:59.210 } 00:28:59.210 Got JSON-RPC error response 00:28:59.210 response: 00:28:59.210 { 00:28:59.210 "code": -1, 00:28:59.210 "message": "Operation not permitted" 00:28:59.210 } 00:28:59.210 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:59.470 [2024-12-09 10:39:44.082962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:59.470 [2024-12-09 10:39:44.083074] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:28:59.470 request: 00:28:59.470 { 00:28:59.470 "name": "TLSTEST", 00:28:59.470 "trtype": "tcp", 00:28:59.470 "traddr": "10.0.0.2", 00:28:59.470 "adrfam": "ipv4", 00:28:59.470 "trsvcid": "4420", 00:28:59.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.470 "prchk_reftag": false, 00:28:59.470 "prchk_guard": false, 00:28:59.470 "hdgst": false, 00:28:59.470 "ddgst": false, 00:28:59.470 "psk": "key0", 00:28:59.470 "allow_unrecognized_csi": false, 00:28:59.470 "method": "bdev_nvme_attach_controller", 00:28:59.470 "req_id": 1 00:28:59.470 } 00:28:59.470 Got JSON-RPC error response 00:28:59.470 response: 00:28:59.470 { 00:28:59.470 "code": -126, 00:28:59.470 "message": "Required key not available" 00:28:59.470 } 00:28:59.470 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2140532 00:28:59.470 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2140532 ']' 00:28:59.470 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2140532 00:28:59.470 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:28:59.471 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.471 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2140532 00:28:59.731 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:59.731 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:59.731 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140532' 00:28:59.731 killing process with pid 2140532 00:28:59.731 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2140532 00:28:59.731 Received shutdown signal, test time was about 10.000000 seconds 00:28:59.731 00:28:59.731 Latency(us) 00:28:59.731 [2024-12-09T09:39:44.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.731 [2024-12-09T09:39:44.385Z] =================================================================================================================== 00:28:59.731 [2024-12-09T09:39:44.385Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:59.731 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2140532 00:28:59.992 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2135737 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2135737 ']' 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2135737 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2135737 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2135737' 00:28:59.993 killing process with pid 2135737 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2135737 00:28:59.993 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2135737 00:29:00.572 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:29:00.572 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:29:00.572 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:29:00.572 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:00.572 10:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:29:00.572 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:29:00.572 10:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Qdhfn96uAt 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Qdhfn96uAt 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2140815 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2140815 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2140815 ']' 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.572 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:00.572 [2024-12-09 10:39:45.143100] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:00.572 [2024-12-09 10:39:45.143288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.833 [2024-12-09 10:39:45.327248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.833 [2024-12-09 10:39:45.442406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.833 [2024-12-09 10:39:45.442513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
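The key_long derivation above (format_interchange_psk via nvmf/common.sh format_key) produces the TLS PSK interchange format. A minimal sketch of the mechanism, assuming format_key base64-encodes the configured key bytes followed by a little-endian CRC32, with digest selector 2 rendered as "02" for the 48-byte PSK; the output reproduces the key logged above:

    # Hedged sketch of the interchange-key derivation (assumptions noted above)
    python3 -c 'import base64,zlib; k=b"00112233445566778899aabbccddeeff0011223344556677"; print("NVMeTLSkey-1:02:"+base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()+":")'
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: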
00:29:00.833 [2024-12-09 10:39:45.442549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.833 [2024-12-09 10:39:45.442578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.833 [2024-12-09 10:39:45.442603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.833 [2024-12-09 10:39:45.443572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.094 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.094 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:01.094 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.094 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.094 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:01.353 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.353 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Qdhfn96uAt 00:29:01.353 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qdhfn96uAt 00:29:01.353 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:01.613 [2024-12-09 10:39:46.085847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.613 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:02.183 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:02.443 [2024-12-09 10:39:47.057774] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:02.443 [2024-12-09 10:39:47.058286] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.443 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:03.013 malloc0 00:29:03.013 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:03.273 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:03.843 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qdhfn96uAt 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qdhfn96uAt 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2141237 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2141237 /var/tmp/bdevperf.sock 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2141237 ']' 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:04.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.103 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:04.103 [2024-12-09 10:39:48.744393] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
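Unlike the negative cases above, this run (target/tls.sh@168) provisions the same key on both ends, so the attach succeeds and bdevperf drives real I/O through TLSTESTn1 below. The target-side sequence, condensed from the RPCs logged above (the -k flag on the listener is what enables TLS on the TCP transport):

    # Target-side TLS provisioning, as driven above (default /var/tmp/spdk.sock)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0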
00:29:04.103 [2024-12-09 10:39:48.744483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141237 ] 00:29:04.363 [2024-12-09 10:39:48.875399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.363 [2024-12-09 10:39:48.989471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.933 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.933 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:04.933 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:05.193 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:05.451 [2024-12-09 10:39:50.102572] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:05.712 TLSTESTn1 00:29:05.712 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:05.712 Running I/O for 10 seconds... 00:29:08.028 1512.00 IOPS, 5.91 MiB/s [2024-12-09T09:39:53.618Z] 1548.50 IOPS, 6.05 MiB/s [2024-12-09T09:39:54.554Z] 1959.33 IOPS, 7.65 MiB/s [2024-12-09T09:39:55.492Z] 1936.75 IOPS, 7.57 MiB/s [2024-12-09T09:39:56.429Z] 2049.80 IOPS, 8.01 MiB/s [2024-12-09T09:39:57.368Z] 2071.33 IOPS, 8.09 MiB/s [2024-12-09T09:39:58.747Z] 2093.57 IOPS, 8.18 MiB/s [2024-12-09T09:39:59.682Z] 2145.50 IOPS, 8.38 MiB/s [2024-12-09T09:40:00.623Z] 2110.33 IOPS, 8.24 MiB/s [2024-12-09T09:40:00.623Z] 2163.20 IOPS, 8.45 MiB/s 00:29:15.969 Latency(us) 00:29:15.969 [2024-12-09T09:40:00.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.969 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:15.969 Verification LBA range: start 0x0 length 0x2000 00:29:15.969 TLSTESTn1 : 10.02 2171.13 8.48 0.00 0.00 58843.93 6990.51 61361.11 00:29:15.969 [2024-12-09T09:40:00.623Z] =================================================================================================================== 00:29:15.969 [2024-12-09T09:40:00.623Z] Total : 2171.13 8.48 0.00 0.00 58843.93 6990.51 61361.11 00:29:15.969 { 00:29:15.969 "results": [ 00:29:15.969 { 00:29:15.969 "job": "TLSTESTn1", 00:29:15.969 "core_mask": "0x4", 00:29:15.969 "workload": "verify", 00:29:15.969 "status": "finished", 00:29:15.969 "verify_range": { 00:29:15.969 "start": 0, 00:29:15.969 "length": 8192 00:29:15.969 }, 00:29:15.969 "queue_depth": 128, 00:29:15.969 "io_size": 4096, 00:29:15.969 "runtime": 10.021521, 00:29:15.969 "iops": 2171.127516471801, 00:29:15.969 "mibps": 8.480966861217972, 00:29:15.969 "io_failed": 0, 00:29:15.969 "io_timeout": 0, 00:29:15.969 "avg_latency_us": 58843.93087218664, 00:29:15.969 "min_latency_us": 6990.506666666667, 00:29:15.969 "max_latency_us": 61361.114074074074 00:29:15.969 } 00:29:15.969 ], 00:29:15.969 "core_count": 1 
00:29:15.969 } 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2141237 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2141237 ']' 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2141237 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141237 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141237' 00:29:15.969 killing process with pid 2141237 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2141237 00:29:15.969 Received shutdown signal, test time was about 10.000000 seconds 00:29:15.969 00:29:15.969 Latency(us) 00:29:15.969 [2024-12-09T09:40:00.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.969 [2024-12-09T09:40:00.623Z] =================================================================================================================== 00:29:15.969 [2024-12-09T09:40:00.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.969 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2141237 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Qdhfn96uAt 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qdhfn96uAt 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qdhfn96uAt 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qdhfn96uAt 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:16.230 10:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qdhfn96uAt 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2142675 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2142675 /var/tmp/bdevperf.sock 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2142675 ']' 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:16.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.230 10:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:16.230 [2024-12-09 10:40:00.813794] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
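The chmod 0666 above deliberately loosens the key file before this bdevperf run (target/tls.sh@172): as the log below shows, the keyring refuses to load a world-readable key (mode 0100666 is rejected), so the attach again fails with -126. The gate being exercised, sketched; the owner-only 0600 requirement is inferred from the keyring_file_check_path error and the chmod calls elsewhere in this suite, not stated outright in the log:

    chmod 0666 /tmp/tmp.Qdhfn96uAt
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt
    # -> "Invalid permissions for key file '/tmp/tmp.Qdhfn96uAt': 0100666"
    chmod 0600 /tmp/tmp.Qdhfn96uAt   # restore before the key will load again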
00:29:16.230 [2024-12-09 10:40:00.813892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142675 ] 00:29:16.491 [2024-12-09 10:40:00.947847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.491 [2024-12-09 10:40:01.064872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.751 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.751 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:16.751 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:17.009 [2024-12-09 10:40:01.616470] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Qdhfn96uAt': 0100666 00:29:17.009 [2024-12-09 10:40:01.616520] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:17.009 request: 00:29:17.009 { 00:29:17.009 "name": "key0", 00:29:17.009 "path": "/tmp/tmp.Qdhfn96uAt", 00:29:17.009 "method": "keyring_file_add_key", 00:29:17.009 "req_id": 1 00:29:17.009 } 00:29:17.009 Got JSON-RPC error response 00:29:17.009 response: 00:29:17.009 { 00:29:17.009 "code": -1, 00:29:17.009 "message": "Operation not permitted" 00:29:17.009 } 00:29:17.009 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:17.574 [2024-12-09 10:40:02.017662] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:17.574 [2024-12-09 10:40:02.017776] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:29:17.574 request: 00:29:17.574 { 00:29:17.574 "name": "TLSTEST", 00:29:17.574 "trtype": "tcp", 00:29:17.574 "traddr": "10.0.0.2", 00:29:17.574 "adrfam": "ipv4", 00:29:17.574 "trsvcid": "4420", 00:29:17.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:17.574 "prchk_reftag": false, 00:29:17.574 "prchk_guard": false, 00:29:17.574 "hdgst": false, 00:29:17.574 "ddgst": false, 00:29:17.574 "psk": "key0", 00:29:17.574 "allow_unrecognized_csi": false, 00:29:17.574 "method": "bdev_nvme_attach_controller", 00:29:17.574 "req_id": 1 00:29:17.574 } 00:29:17.574 Got JSON-RPC error response 00:29:17.574 response: 00:29:17.574 { 00:29:17.574 "code": -126, 00:29:17.574 "message": "Required key not available" 00:29:17.574 } 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2142675 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2142675 ']' 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2142675 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142675 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142675' 00:29:17.574 killing process with pid 2142675 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2142675 00:29:17.574 Received shutdown signal, test time was about 10.000000 seconds 00:29:17.574 00:29:17.574 Latency(us) 00:29:17.574 [2024-12-09T09:40:02.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.574 [2024-12-09T09:40:02.228Z] =================================================================================================================== 00:29:17.574 [2024-12-09T09:40:02.228Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:17.574 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2142675 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2140815 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2140815 ']' 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2140815 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140815 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140815' 00:29:17.834 killing process with pid 2140815 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2140815 00:29:17.834 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2140815 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2142867 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2142867 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2142867 ']' 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.405 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:18.405 [2024-12-09 10:40:02.894625] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:18.405 [2024-12-09 10:40:02.894848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.666 [2024-12-09 10:40:03.079637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.666 [2024-12-09 10:40:03.196757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.666 [2024-12-09 10:40:03.196877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.666 [2024-12-09 10:40:03.196913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.666 [2024-12-09 10:40:03.196961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.666 [2024-12-09 10:40:03.196989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
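The next case (target/tls.sh@178) repeats the target-side setup while the key file is still world-readable: the keyring load fails exactly as before, and nvmf_subsystem_add_host then reports that key0 does not exist (JSON-RPC -32603, Internal error), both visible below. Condensed:

    # Failure mode shown below: the key was never added, so binding it to a host fails
    rpc.py keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt            # rejected: file is still 0666
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                        # -> "Key 'key0' does not exist"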
00:29:18.666 [2024-12-09 10:40:03.198339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Qdhfn96uAt 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Qdhfn96uAt 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Qdhfn96uAt 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qdhfn96uAt 00:29:18.927 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:19.868 [2024-12-09 10:40:04.186162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.868 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:20.439 10:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:21.015 [2024-12-09 10:40:05.599434] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:21.015 [2024-12-09 10:40:05.599965] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.015 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:21.671 malloc0 00:29:21.931 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:22.502 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:23.092 [2024-12-09 
10:40:07.713955] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Qdhfn96uAt': 0100666 00:29:23.092 [2024-12-09 10:40:07.714048] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:23.092 request: 00:29:23.092 { 00:29:23.092 "name": "key0", 00:29:23.092 "path": "/tmp/tmp.Qdhfn96uAt", 00:29:23.092 "method": "keyring_file_add_key", 00:29:23.092 "req_id": 1 00:29:23.092 } 00:29:23.092 Got JSON-RPC error response 00:29:23.092 response: 00:29:23.092 { 00:29:23.092 "code": -1, 00:29:23.092 "message": "Operation not permitted" 00:29:23.092 } 00:29:23.092 10:40:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:29:24.029 [2024-12-09 10:40:08.384043] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:29:24.029 [2024-12-09 10:40:08.384184] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:29:24.029 request: 00:29:24.029 { 00:29:24.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.029 "host": "nqn.2016-06.io.spdk:host1", 00:29:24.029 "psk": "key0", 00:29:24.029 "method": "nvmf_subsystem_add_host", 00:29:24.029 "req_id": 1 00:29:24.029 } 00:29:24.029 Got JSON-RPC error response 00:29:24.029 response: 00:29:24.029 { 00:29:24.029 "code": -32603, 00:29:24.029 "message": "Internal error" 00:29:24.029 } 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2142867 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2142867 ']' 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2142867 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142867 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142867' 00:29:24.029 killing process with pid 2142867 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2142867 00:29:24.029 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2142867 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Qdhfn96uAt 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:29:24.288 10:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2143643 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2143643 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2143643 ']' 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.288 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:24.288 [2024-12-09 10:40:08.936132] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:24.288 [2024-12-09 10:40:08.936239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.548 [2024-12-09 10:40:09.083479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.548 [2024-12-09 10:40:09.199413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.548 [2024-12-09 10:40:09.199528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.548 [2024-12-09 10:40:09.199564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.548 [2024-12-09 10:40:09.199601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.548 [2024-12-09 10:40:09.199614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
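[Note on the failure above: it is the harness's deliberate negative test. SPDK's file-based keyring refuses any key file that is group- or world-accessible, so keyring_file_add_key returns -1 (Operation not permitted) for the 0666 key, and the dependent nvmf_subsystem_add_host then fails with "Key 'key0' does not exist". The chmod 0600 at target/tls.sh@182 is what allows the positive run that follows. A minimal sketch of the requirement, assuming rpc.py is on PATH and a target is listening on the default /var/tmp/spdk.sock; the key payload below is a placeholder, not the test's real PSK:

    psk=/tmp/psk.txt
    # NVMe-oF TLS PSK interchange format; the base64 payload is dummy data
    echo 'NVMeTLSkey-1:01:AAAAaaaaAAAAaaaaAAAAaaaaAAAAaaaaAAAAaaaaAAAAaa==:' > "$psk"
    chmod 0666 "$psk"
    rpc.py keyring_file_add_key key0 "$psk"   # rejected: invalid permissions 0100666
    chmod 0600 "$psk"                         # owner-only, as the harness does
    rpc.py keyring_file_add_key key0 "$psk"   # accepted
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
]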
00:29:24.548 [2024-12-09 10:40:09.200432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Qdhfn96uAt 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qdhfn96uAt 00:29:25.116 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:25.377 [2024-12-09 10:40:09.908449] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.377 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:26.333 10:40:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:26.902 [2024-12-09 10:40:11.329418] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:26.902 [2024-12-09 10:40:11.329856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.902 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:27.471 malloc0 00:29:27.471 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:28.039 10:40:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:28.975 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2144193 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2144193 /var/tmp/bdevperf.sock 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2144193 ']' 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.545 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:29.546 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.546 10:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:29.546 [2024-12-09 10:40:14.063776] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:29.546 [2024-12-09 10:40:14.063871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144193 ] 00:29:29.546 [2024-12-09 10:40:14.195424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.805 [2024-12-09 10:40:14.314325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.740 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.740 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:30.740 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:31.309 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:31.567 [2024-12-09 10:40:16.130485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:31.567 TLSTESTn1 00:29:31.826 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:29:32.087 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:29:32.087 "subsystems": [ 00:29:32.087 { 00:29:32.087 "subsystem": "keyring", 00:29:32.087 "config": [ 00:29:32.087 { 00:29:32.087 "method": "keyring_file_add_key", 00:29:32.087 "params": { 00:29:32.087 "name": "key0", 00:29:32.087 "path": "/tmp/tmp.Qdhfn96uAt" 00:29:32.087 } 00:29:32.087 } 00:29:32.087 ] 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "subsystem": "iobuf", 00:29:32.087 "config": [ 00:29:32.087 { 00:29:32.087 "method": "iobuf_set_options", 00:29:32.087 "params": { 00:29:32.087 "small_pool_count": 8192, 00:29:32.087 "large_pool_count": 1024, 00:29:32.087 "small_bufsize": 8192, 00:29:32.087 "large_bufsize": 135168, 00:29:32.087 "enable_numa": false 00:29:32.087 } 00:29:32.087 } 00:29:32.087 ] 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "subsystem": "sock", 00:29:32.087 "config": [ 00:29:32.087 { 00:29:32.087 "method": "sock_set_default_impl", 00:29:32.087 "params": { 00:29:32.087 "impl_name": "posix" 
00:29:32.087 } 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "method": "sock_impl_set_options", 00:29:32.087 "params": { 00:29:32.087 "impl_name": "ssl", 00:29:32.087 "recv_buf_size": 4096, 00:29:32.087 "send_buf_size": 4096, 00:29:32.087 "enable_recv_pipe": true, 00:29:32.087 "enable_quickack": false, 00:29:32.087 "enable_placement_id": 0, 00:29:32.087 "enable_zerocopy_send_server": true, 00:29:32.087 "enable_zerocopy_send_client": false, 00:29:32.087 "zerocopy_threshold": 0, 00:29:32.087 "tls_version": 0, 00:29:32.087 "enable_ktls": false 00:29:32.087 } 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "method": "sock_impl_set_options", 00:29:32.087 "params": { 00:29:32.087 "impl_name": "posix", 00:29:32.087 "recv_buf_size": 2097152, 00:29:32.087 "send_buf_size": 2097152, 00:29:32.087 "enable_recv_pipe": true, 00:29:32.087 "enable_quickack": false, 00:29:32.087 "enable_placement_id": 0, 00:29:32.087 "enable_zerocopy_send_server": true, 00:29:32.087 "enable_zerocopy_send_client": false, 00:29:32.087 "zerocopy_threshold": 0, 00:29:32.087 "tls_version": 0, 00:29:32.087 "enable_ktls": false 00:29:32.087 } 00:29:32.087 } 00:29:32.087 ] 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "subsystem": "vmd", 00:29:32.087 "config": [] 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "subsystem": "accel", 00:29:32.087 "config": [ 00:29:32.087 { 00:29:32.087 "method": "accel_set_options", 00:29:32.087 "params": { 00:29:32.087 "small_cache_size": 128, 00:29:32.087 "large_cache_size": 16, 00:29:32.087 "task_count": 2048, 00:29:32.087 "sequence_count": 2048, 00:29:32.087 "buf_count": 2048 00:29:32.087 } 00:29:32.087 } 00:29:32.087 ] 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "subsystem": "bdev", 00:29:32.087 "config": [ 00:29:32.087 { 00:29:32.087 "method": "bdev_set_options", 00:29:32.087 "params": { 00:29:32.087 "bdev_io_pool_size": 65535, 00:29:32.087 "bdev_io_cache_size": 256, 00:29:32.087 "bdev_auto_examine": true, 00:29:32.087 "iobuf_small_cache_size": 128, 00:29:32.087 "iobuf_large_cache_size": 16 00:29:32.087 } 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "method": "bdev_raid_set_options", 00:29:32.087 "params": { 00:29:32.087 "process_window_size_kb": 1024, 00:29:32.087 "process_max_bandwidth_mb_sec": 0 00:29:32.087 } 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "method": "bdev_iscsi_set_options", 00:29:32.087 "params": { 00:29:32.087 "timeout_sec": 30 00:29:32.087 } 00:29:32.087 }, 00:29:32.087 { 00:29:32.087 "method": "bdev_nvme_set_options", 00:29:32.087 "params": { 00:29:32.087 "action_on_timeout": "none", 00:29:32.087 "timeout_us": 0, 00:29:32.087 "timeout_admin_us": 0, 00:29:32.087 "keep_alive_timeout_ms": 10000, 00:29:32.088 "arbitration_burst": 0, 00:29:32.088 "low_priority_weight": 0, 00:29:32.088 "medium_priority_weight": 0, 00:29:32.088 "high_priority_weight": 0, 00:29:32.088 "nvme_adminq_poll_period_us": 10000, 00:29:32.088 "nvme_ioq_poll_period_us": 0, 00:29:32.088 "io_queue_requests": 0, 00:29:32.088 "delay_cmd_submit": true, 00:29:32.088 "transport_retry_count": 4, 00:29:32.088 "bdev_retry_count": 3, 00:29:32.088 "transport_ack_timeout": 0, 00:29:32.088 "ctrlr_loss_timeout_sec": 0, 00:29:32.088 "reconnect_delay_sec": 0, 00:29:32.088 "fast_io_fail_timeout_sec": 0, 00:29:32.088 "disable_auto_failback": false, 00:29:32.088 "generate_uuids": false, 00:29:32.088 "transport_tos": 0, 00:29:32.088 "nvme_error_stat": false, 00:29:32.088 "rdma_srq_size": 0, 00:29:32.088 "io_path_stat": false, 00:29:32.088 "allow_accel_sequence": false, 00:29:32.088 "rdma_max_cq_size": 0, 00:29:32.088 
"rdma_cm_event_timeout_ms": 0, 00:29:32.088 "dhchap_digests": [ 00:29:32.088 "sha256", 00:29:32.088 "sha384", 00:29:32.088 "sha512" 00:29:32.088 ], 00:29:32.088 "dhchap_dhgroups": [ 00:29:32.088 "null", 00:29:32.088 "ffdhe2048", 00:29:32.088 "ffdhe3072", 00:29:32.088 "ffdhe4096", 00:29:32.088 "ffdhe6144", 00:29:32.088 "ffdhe8192" 00:29:32.088 ] 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "bdev_nvme_set_hotplug", 00:29:32.088 "params": { 00:29:32.088 "period_us": 100000, 00:29:32.088 "enable": false 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "bdev_malloc_create", 00:29:32.088 "params": { 00:29:32.088 "name": "malloc0", 00:29:32.088 "num_blocks": 8192, 00:29:32.088 "block_size": 4096, 00:29:32.088 "physical_block_size": 4096, 00:29:32.088 "uuid": "8502efd9-06f2-4819-939a-b4033adbf68c", 00:29:32.088 "optimal_io_boundary": 0, 00:29:32.088 "md_size": 0, 00:29:32.088 "dif_type": 0, 00:29:32.088 "dif_is_head_of_md": false, 00:29:32.088 "dif_pi_format": 0 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "bdev_wait_for_examine" 00:29:32.088 } 00:29:32.088 ] 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "subsystem": "nbd", 00:29:32.088 "config": [] 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "subsystem": "scheduler", 00:29:32.088 "config": [ 00:29:32.088 { 00:29:32.088 "method": "framework_set_scheduler", 00:29:32.088 "params": { 00:29:32.088 "name": "static" 00:29:32.088 } 00:29:32.088 } 00:29:32.088 ] 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "subsystem": "nvmf", 00:29:32.088 "config": [ 00:29:32.088 { 00:29:32.088 "method": "nvmf_set_config", 00:29:32.088 "params": { 00:29:32.088 "discovery_filter": "match_any", 00:29:32.088 "admin_cmd_passthru": { 00:29:32.088 "identify_ctrlr": false 00:29:32.088 }, 00:29:32.088 "dhchap_digests": [ 00:29:32.088 "sha256", 00:29:32.088 "sha384", 00:29:32.088 "sha512" 00:29:32.088 ], 00:29:32.088 "dhchap_dhgroups": [ 00:29:32.088 "null", 00:29:32.088 "ffdhe2048", 00:29:32.088 "ffdhe3072", 00:29:32.088 "ffdhe4096", 00:29:32.088 "ffdhe6144", 00:29:32.088 "ffdhe8192" 00:29:32.088 ] 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "nvmf_set_max_subsystems", 00:29:32.088 "params": { 00:29:32.088 "max_subsystems": 1024 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "nvmf_set_crdt", 00:29:32.088 "params": { 00:29:32.088 "crdt1": 0, 00:29:32.088 "crdt2": 0, 00:29:32.088 "crdt3": 0 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "nvmf_create_transport", 00:29:32.088 "params": { 00:29:32.088 "trtype": "TCP", 00:29:32.088 "max_queue_depth": 128, 00:29:32.088 "max_io_qpairs_per_ctrlr": 127, 00:29:32.088 "in_capsule_data_size": 4096, 00:29:32.088 "max_io_size": 131072, 00:29:32.088 "io_unit_size": 131072, 00:29:32.088 "max_aq_depth": 128, 00:29:32.088 "num_shared_buffers": 511, 00:29:32.088 "buf_cache_size": 4294967295, 00:29:32.088 "dif_insert_or_strip": false, 00:29:32.088 "zcopy": false, 00:29:32.088 "c2h_success": false, 00:29:32.088 "sock_priority": 0, 00:29:32.088 "abort_timeout_sec": 1, 00:29:32.088 "ack_timeout": 0, 00:29:32.088 "data_wr_pool_size": 0 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "nvmf_create_subsystem", 00:29:32.088 "params": { 00:29:32.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.088 "allow_any_host": false, 00:29:32.088 "serial_number": "SPDK00000000000001", 00:29:32.088 "model_number": "SPDK bdev Controller", 00:29:32.088 "max_namespaces": 10, 00:29:32.088 "min_cntlid": 1, 00:29:32.088 
"max_cntlid": 65519, 00:29:32.088 "ana_reporting": false 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "nvmf_subsystem_add_host", 00:29:32.088 "params": { 00:29:32.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.088 "host": "nqn.2016-06.io.spdk:host1", 00:29:32.088 "psk": "key0" 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "nvmf_subsystem_add_ns", 00:29:32.088 "params": { 00:29:32.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.088 "namespace": { 00:29:32.088 "nsid": 1, 00:29:32.088 "bdev_name": "malloc0", 00:29:32.088 "nguid": "8502EFD906F24819939AB4033ADBF68C", 00:29:32.088 "uuid": "8502efd9-06f2-4819-939a-b4033adbf68c", 00:29:32.088 "no_auto_visible": false 00:29:32.088 } 00:29:32.088 } 00:29:32.088 }, 00:29:32.088 { 00:29:32.088 "method": "nvmf_subsystem_add_listener", 00:29:32.088 "params": { 00:29:32.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.088 "listen_address": { 00:29:32.088 "trtype": "TCP", 00:29:32.088 "adrfam": "IPv4", 00:29:32.088 "traddr": "10.0.0.2", 00:29:32.088 "trsvcid": "4420" 00:29:32.088 }, 00:29:32.088 "secure_channel": true 00:29:32.088 } 00:29:32.088 } 00:29:32.088 ] 00:29:32.088 } 00:29:32.088 ] 00:29:32.088 }' 00:29:32.088 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:29:32.659 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:29:32.659 "subsystems": [ 00:29:32.659 { 00:29:32.659 "subsystem": "keyring", 00:29:32.659 "config": [ 00:29:32.659 { 00:29:32.659 "method": "keyring_file_add_key", 00:29:32.659 "params": { 00:29:32.659 "name": "key0", 00:29:32.659 "path": "/tmp/tmp.Qdhfn96uAt" 00:29:32.659 } 00:29:32.659 } 00:29:32.659 ] 00:29:32.659 }, 00:29:32.659 { 00:29:32.659 "subsystem": "iobuf", 00:29:32.659 "config": [ 00:29:32.659 { 00:29:32.659 "method": "iobuf_set_options", 00:29:32.659 "params": { 00:29:32.659 "small_pool_count": 8192, 00:29:32.659 "large_pool_count": 1024, 00:29:32.659 "small_bufsize": 8192, 00:29:32.659 "large_bufsize": 135168, 00:29:32.659 "enable_numa": false 00:29:32.659 } 00:29:32.659 } 00:29:32.659 ] 00:29:32.659 }, 00:29:32.659 { 00:29:32.659 "subsystem": "sock", 00:29:32.659 "config": [ 00:29:32.659 { 00:29:32.659 "method": "sock_set_default_impl", 00:29:32.659 "params": { 00:29:32.659 "impl_name": "posix" 00:29:32.659 } 00:29:32.659 }, 00:29:32.659 { 00:29:32.659 "method": "sock_impl_set_options", 00:29:32.659 "params": { 00:29:32.659 "impl_name": "ssl", 00:29:32.659 "recv_buf_size": 4096, 00:29:32.659 "send_buf_size": 4096, 00:29:32.659 "enable_recv_pipe": true, 00:29:32.659 "enable_quickack": false, 00:29:32.659 "enable_placement_id": 0, 00:29:32.659 "enable_zerocopy_send_server": true, 00:29:32.659 "enable_zerocopy_send_client": false, 00:29:32.659 "zerocopy_threshold": 0, 00:29:32.659 "tls_version": 0, 00:29:32.659 "enable_ktls": false 00:29:32.659 } 00:29:32.659 }, 00:29:32.660 { 00:29:32.660 "method": "sock_impl_set_options", 00:29:32.660 "params": { 00:29:32.660 "impl_name": "posix", 00:29:32.660 "recv_buf_size": 2097152, 00:29:32.660 "send_buf_size": 2097152, 00:29:32.660 "enable_recv_pipe": true, 00:29:32.660 "enable_quickack": false, 00:29:32.660 "enable_placement_id": 0, 00:29:32.660 "enable_zerocopy_send_server": true, 00:29:32.660 "enable_zerocopy_send_client": false, 00:29:32.660 "zerocopy_threshold": 0, 00:29:32.660 "tls_version": 0, 00:29:32.660 "enable_ktls": false 00:29:32.660 } 00:29:32.660 
} 00:29:32.660 ] 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "subsystem": "vmd", 00:29:32.660 "config": [] 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "subsystem": "accel", 00:29:32.660 "config": [ 00:29:32.660 { 00:29:32.660 "method": "accel_set_options", 00:29:32.660 "params": { 00:29:32.660 "small_cache_size": 128, 00:29:32.660 "large_cache_size": 16, 00:29:32.660 "task_count": 2048, 00:29:32.660 "sequence_count": 2048, 00:29:32.660 "buf_count": 2048 00:29:32.660 } 00:29:32.660 } 00:29:32.660 ] 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "subsystem": "bdev", 00:29:32.660 "config": [ 00:29:32.660 { 00:29:32.660 "method": "bdev_set_options", 00:29:32.660 "params": { 00:29:32.660 "bdev_io_pool_size": 65535, 00:29:32.660 "bdev_io_cache_size": 256, 00:29:32.660 "bdev_auto_examine": true, 00:29:32.660 "iobuf_small_cache_size": 128, 00:29:32.660 "iobuf_large_cache_size": 16 00:29:32.660 } 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "method": "bdev_raid_set_options", 00:29:32.660 "params": { 00:29:32.660 "process_window_size_kb": 1024, 00:29:32.660 "process_max_bandwidth_mb_sec": 0 00:29:32.660 } 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "method": "bdev_iscsi_set_options", 00:29:32.660 "params": { 00:29:32.660 "timeout_sec": 30 00:29:32.660 } 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "method": "bdev_nvme_set_options", 00:29:32.660 "params": { 00:29:32.660 "action_on_timeout": "none", 00:29:32.660 "timeout_us": 0, 00:29:32.660 "timeout_admin_us": 0, 00:29:32.660 "keep_alive_timeout_ms": 10000, 00:29:32.660 "arbitration_burst": 0, 00:29:32.660 "low_priority_weight": 0, 00:29:32.660 "medium_priority_weight": 0, 00:29:32.660 "high_priority_weight": 0, 00:29:32.660 "nvme_adminq_poll_period_us": 10000, 00:29:32.660 "nvme_ioq_poll_period_us": 0, 00:29:32.660 "io_queue_requests": 512, 00:29:32.660 "delay_cmd_submit": true, 00:29:32.660 "transport_retry_count": 4, 00:29:32.660 "bdev_retry_count": 3, 00:29:32.660 "transport_ack_timeout": 0, 00:29:32.660 "ctrlr_loss_timeout_sec": 0, 00:29:32.660 "reconnect_delay_sec": 0, 00:29:32.660 "fast_io_fail_timeout_sec": 0, 00:29:32.660 "disable_auto_failback": false, 00:29:32.660 "generate_uuids": false, 00:29:32.660 "transport_tos": 0, 00:29:32.660 "nvme_error_stat": false, 00:29:32.660 "rdma_srq_size": 0, 00:29:32.660 "io_path_stat": false, 00:29:32.660 "allow_accel_sequence": false, 00:29:32.660 "rdma_max_cq_size": 0, 00:29:32.660 "rdma_cm_event_timeout_ms": 0, 00:29:32.660 "dhchap_digests": [ 00:29:32.660 "sha256", 00:29:32.660 "sha384", 00:29:32.660 "sha512" 00:29:32.660 ], 00:29:32.660 "dhchap_dhgroups": [ 00:29:32.660 "null", 00:29:32.660 "ffdhe2048", 00:29:32.660 "ffdhe3072", 00:29:32.660 "ffdhe4096", 00:29:32.660 "ffdhe6144", 00:29:32.660 "ffdhe8192" 00:29:32.660 ] 00:29:32.660 } 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "method": "bdev_nvme_attach_controller", 00:29:32.660 "params": { 00:29:32.660 "name": "TLSTEST", 00:29:32.660 "trtype": "TCP", 00:29:32.660 "adrfam": "IPv4", 00:29:32.660 "traddr": "10.0.0.2", 00:29:32.660 "trsvcid": "4420", 00:29:32.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.660 "prchk_reftag": false, 00:29:32.660 "prchk_guard": false, 00:29:32.660 "ctrlr_loss_timeout_sec": 0, 00:29:32.660 "reconnect_delay_sec": 0, 00:29:32.660 "fast_io_fail_timeout_sec": 0, 00:29:32.660 "psk": "key0", 00:29:32.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.660 "hdgst": false, 00:29:32.660 "ddgst": false, 00:29:32.660 "multipath": "multipath" 00:29:32.660 } 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "method": 
"bdev_nvme_set_hotplug", 00:29:32.660 "params": { 00:29:32.660 "period_us": 100000, 00:29:32.660 "enable": false 00:29:32.660 } 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "method": "bdev_wait_for_examine" 00:29:32.660 } 00:29:32.660 ] 00:29:32.660 }, 00:29:32.660 { 00:29:32.660 "subsystem": "nbd", 00:29:32.660 "config": [] 00:29:32.660 } 00:29:32.660 ] 00:29:32.660 }' 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2144193 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2144193 ']' 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2144193 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144193 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144193' 00:29:32.660 killing process with pid 2144193 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2144193 00:29:32.660 Received shutdown signal, test time was about 10.000000 seconds 00:29:32.660 00:29:32.660 Latency(us) 00:29:32.660 [2024-12-09T09:40:17.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.660 [2024-12-09T09:40:17.314Z] =================================================================================================================== 00:29:32.660 [2024-12-09T09:40:17.314Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:32.660 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2144193 00:29:32.920 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2143643 00:29:32.920 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2143643 ']' 00:29:32.920 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2143643 00:29:32.920 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:32.920 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.920 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143643 00:29:33.181 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:33.181 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:33.181 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143643' 00:29:33.181 killing process with pid 2143643 00:29:33.181 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2143643 00:29:33.181 10:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2143643 00:29:33.441 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:29:33.441 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.441 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:29:33.441 "subsystems": [ 00:29:33.441 { 00:29:33.441 "subsystem": "keyring", 00:29:33.441 "config": [ 00:29:33.441 { 00:29:33.441 "method": "keyring_file_add_key", 00:29:33.441 "params": { 00:29:33.441 "name": "key0", 00:29:33.441 "path": "/tmp/tmp.Qdhfn96uAt" 00:29:33.441 } 00:29:33.441 } 00:29:33.441 ] 00:29:33.441 }, 00:29:33.441 { 00:29:33.441 "subsystem": "iobuf", 00:29:33.441 "config": [ 00:29:33.441 { 00:29:33.441 "method": "iobuf_set_options", 00:29:33.441 "params": { 00:29:33.441 "small_pool_count": 8192, 00:29:33.441 "large_pool_count": 1024, 00:29:33.441 "small_bufsize": 8192, 00:29:33.441 "large_bufsize": 135168, 00:29:33.441 "enable_numa": false 00:29:33.441 } 00:29:33.441 } 00:29:33.441 ] 00:29:33.441 }, 00:29:33.441 { 00:29:33.441 "subsystem": "sock", 00:29:33.441 "config": [ 00:29:33.441 { 00:29:33.441 "method": "sock_set_default_impl", 00:29:33.441 "params": { 00:29:33.441 "impl_name": "posix" 00:29:33.441 } 00:29:33.441 }, 00:29:33.441 { 00:29:33.441 "method": "sock_impl_set_options", 00:29:33.441 "params": { 00:29:33.441 "impl_name": "ssl", 00:29:33.441 "recv_buf_size": 4096, 00:29:33.441 "send_buf_size": 4096, 00:29:33.441 "enable_recv_pipe": true, 00:29:33.441 "enable_quickack": false, 00:29:33.441 "enable_placement_id": 0, 00:29:33.441 "enable_zerocopy_send_server": true, 00:29:33.441 "enable_zerocopy_send_client": false, 00:29:33.441 "zerocopy_threshold": 0, 00:29:33.441 "tls_version": 0, 00:29:33.441 "enable_ktls": false 00:29:33.441 } 00:29:33.441 }, 00:29:33.441 { 00:29:33.441 "method": "sock_impl_set_options", 00:29:33.441 "params": { 00:29:33.441 "impl_name": "posix", 00:29:33.441 "recv_buf_size": 2097152, 00:29:33.441 "send_buf_size": 2097152, 00:29:33.441 "enable_recv_pipe": true, 00:29:33.441 "enable_quickack": false, 00:29:33.442 "enable_placement_id": 0, 00:29:33.442 "enable_zerocopy_send_server": true, 00:29:33.442 "enable_zerocopy_send_client": false, 00:29:33.442 "zerocopy_threshold": 0, 00:29:33.442 "tls_version": 0, 00:29:33.442 "enable_ktls": false 00:29:33.442 } 00:29:33.442 } 00:29:33.442 ] 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "subsystem": "vmd", 00:29:33.442 "config": [] 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "subsystem": "accel", 00:29:33.442 "config": [ 00:29:33.442 { 00:29:33.442 "method": "accel_set_options", 00:29:33.442 "params": { 00:29:33.442 "small_cache_size": 128, 00:29:33.442 "large_cache_size": 16, 00:29:33.442 "task_count": 2048, 00:29:33.442 "sequence_count": 2048, 00:29:33.442 "buf_count": 2048 00:29:33.442 } 00:29:33.442 } 00:29:33.442 ] 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "subsystem": "bdev", 00:29:33.442 "config": [ 00:29:33.442 { 00:29:33.442 "method": "bdev_set_options", 00:29:33.442 "params": { 00:29:33.442 "bdev_io_pool_size": 65535, 00:29:33.442 "bdev_io_cache_size": 256, 00:29:33.442 "bdev_auto_examine": true, 00:29:33.442 "iobuf_small_cache_size": 128, 00:29:33.442 "iobuf_large_cache_size": 16 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "bdev_raid_set_options", 00:29:33.442 "params": { 00:29:33.442 "process_window_size_kb": 1024, 00:29:33.442 "process_max_bandwidth_mb_sec": 0 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "bdev_iscsi_set_options", 00:29:33.442 "params": { 00:29:33.442 
"timeout_sec": 30 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "bdev_nvme_set_options", 00:29:33.442 "params": { 00:29:33.442 "action_on_timeout": "none", 00:29:33.442 "timeout_us": 0, 00:29:33.442 "timeout_admin_us": 0, 00:29:33.442 "keep_alive_timeout_ms": 10000, 00:29:33.442 "arbitration_burst": 0, 00:29:33.442 "low_priority_weight": 0, 00:29:33.442 "medium_priority_weight": 0, 00:29:33.442 "high_priority_weight": 0, 00:29:33.442 "nvme_adminq_poll_period_us": 10000, 00:29:33.442 "nvme_ioq_poll_period_us": 0, 00:29:33.442 "io_queue_requests": 0, 00:29:33.442 "delay_cmd_submit": true, 00:29:33.442 "transport_retry_count": 4, 00:29:33.442 "bdev_retry_count": 3, 00:29:33.442 "transport_ack_timeout": 0, 00:29:33.442 "ctrlr_loss_timeout_sec": 0, 00:29:33.442 "reconnect_delay_sec": 0, 00:29:33.442 "fast_io_fail_timeout_sec": 0, 00:29:33.442 "disable_auto_failback": false, 00:29:33.442 "generate_uuids": false, 00:29:33.442 "transport_tos": 0, 00:29:33.442 "nvme_error_stat": false, 00:29:33.442 "rdma_srq_size": 0, 00:29:33.442 "io_path_stat": false, 00:29:33.442 "allow_accel_sequence": false, 00:29:33.442 "rdma_max_cq_size": 0, 00:29:33.442 "rdma_cm_event_timeout_ms": 0, 00:29:33.442 "dhchap_digests": [ 00:29:33.442 "sha256", 00:29:33.442 "sha384", 00:29:33.442 "sha512" 00:29:33.442 ], 00:29:33.442 "dhchap_dhgroups": [ 00:29:33.442 "null", 00:29:33.442 "ffdhe2048", 00:29:33.442 "ffdhe3072", 00:29:33.442 "ffdhe4096", 00:29:33.442 "ffdhe6144", 00:29:33.442 "ffdhe8192" 00:29:33.442 ] 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "bdev_nvme_set_hotplug", 00:29:33.442 "params": { 00:29:33.442 "period_us": 100000, 00:29:33.442 "enable": false 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "bdev_malloc_create", 00:29:33.442 "params": { 00:29:33.442 "name": "malloc0", 00:29:33.442 "num_blocks": 8192, 00:29:33.442 "block_size": 4096, 00:29:33.442 "physical_block_size": 4096, 00:29:33.442 "uuid": "8502efd9-06f2-4819-939a-b4033adbf68c", 00:29:33.442 "optimal_io_boundary": 0, 00:29:33.442 "md_size": 0, 00:29:33.442 "dif_type": 0, 00:29:33.442 "dif_is_head_of_md": false, 00:29:33.442 "dif_pi_format": 0 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "bdev_wait_for_examine" 00:29:33.442 } 00:29:33.442 ] 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "subsystem": "nbd", 00:29:33.442 "config": [] 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "subsystem": "scheduler", 00:29:33.442 "config": [ 00:29:33.442 { 00:29:33.442 "method": "framework_set_scheduler", 00:29:33.442 "params": { 00:29:33.442 "name": "static" 00:29:33.442 } 00:29:33.442 } 00:29:33.442 ] 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "subsystem": "nvmf", 00:29:33.442 "config": [ 00:29:33.442 { 00:29:33.442 "method": "nvmf_set_config", 00:29:33.442 "params": { 00:29:33.442 "discovery_filter": "match_any", 00:29:33.442 "admin_cmd_passthru": { 00:29:33.442 "identify_ctrlr": false 00:29:33.442 }, 00:29:33.442 "dhchap_digests": [ 00:29:33.442 "sha256", 00:29:33.442 "sha384", 00:29:33.442 "sha512" 00:29:33.442 ], 00:29:33.442 "dhchap_dhgroups": [ 00:29:33.442 "null", 00:29:33.442 "ffdhe2048", 00:29:33.442 "ffdhe3072", 00:29:33.442 "ffdhe4096", 00:29:33.442 "ffdhe6144", 00:29:33.442 "ffdhe8192" 00:29:33.442 ] 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "nvmf_set_max_subsystems", 00:29:33.442 "params": { 00:29:33.442 "max_subsystems": 1024 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "nvmf_set_crdt", 00:29:33.442 "params": { 
00:29:33.442 "crdt1": 0, 00:29:33.442 "crdt2": 0, 00:29:33.442 "crdt3": 0 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "nvmf_create_transport", 00:29:33.442 "params": { 00:29:33.442 "trtype": "TCP", 00:29:33.442 "max_queue_depth": 128, 00:29:33.442 "max_io_qpairs_per_ctrlr": 127, 00:29:33.442 "in_capsule_data_size": 4096, 00:29:33.442 "max_io_size": 131072, 00:29:33.442 "io_unit_size": 131072, 00:29:33.442 "max_aq_depth": 128, 00:29:33.442 "num_shared_buffers": 511, 00:29:33.442 "buf_cache_size": 4294967295, 00:29:33.442 "dif_insert_or_strip": false, 00:29:33.442 "zcopy": false, 00:29:33.442 "c2h_success": false, 00:29:33.442 "sock_priority": 0, 00:29:33.442 "abort_timeout_sec": 1, 00:29:33.442 "ack_timeout": 0, 00:29:33.442 "data_wr_pool_size": 0 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "nvmf_create_subsystem", 00:29:33.442 "params": { 00:29:33.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.442 "allow_any_host": false, 00:29:33.442 "serial_number": "SPDK00000000000001", 00:29:33.442 "model_number": "SPDK bdev Controller", 00:29:33.442 "max_namespaces": 10, 00:29:33.442 "min_cntlid": 1, 00:29:33.442 "max_cntlid": 65519, 00:29:33.442 "ana_reporting": false 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "nvmf_subsystem_add_host", 00:29:33.442 "params": { 00:29:33.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.442 "host": "nqn.2016-06.io.spdk:host1", 00:29:33.442 "psk": "key0" 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "nvmf_subsystem_add_ns", 00:29:33.442 "params": { 00:29:33.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.442 "namespace": { 00:29:33.442 "nsid": 1, 00:29:33.442 "bdev_name": "malloc0", 00:29:33.442 "nguid": "8502EFD906F24819939AB4033ADBF68C", 00:29:33.442 "uuid": "8502efd9-06f2-4819-939a-b4033adbf68c", 00:29:33.442 "no_auto_visible": false 00:29:33.442 } 00:29:33.442 } 00:29:33.442 }, 00:29:33.442 { 00:29:33.442 "method": "nvmf_subsystem_add_listener", 00:29:33.442 "params": { 00:29:33.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.442 "listen_address": { 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.442 "trtype": "TCP", 00:29:33.442 "adrfam": "IPv4", 00:29:33.442 "traddr": "10.0.0.2", 00:29:33.442 "trsvcid": "4420" 00:29:33.442 }, 00:29:33.442 "secure_channel": true 00:29:33.442 } 00:29:33.442 } 00:29:33.442 ] 00:29:33.442 } 00:29:33.442 ] 00:29:33.442 }' 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2144615 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2144615 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2144615 ']' 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:29:33.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.442 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:33.702 [2024-12-09 10:40:18.149697] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:33.702 [2024-12-09 10:40:18.149887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.702 [2024-12-09 10:40:18.329234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.961 [2024-12-09 10:40:18.443680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.961 [2024-12-09 10:40:18.443824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.961 [2024-12-09 10:40:18.443864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.961 [2024-12-09 10:40:18.443904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.961 [2024-12-09 10:40:18.443916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.961 [2024-12-09 10:40:18.444943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.220 [2024-12-09 10:40:18.789366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.220 [2024-12-09 10:40:18.821992] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:34.220 [2024-12-09 10:40:18.822381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2144884 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2144884 /var/tmp/bdevperf.sock 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2144884 ']' 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:29:35.157 10:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.157 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:29:35.157 "subsystems": [ 00:29:35.157 { 00:29:35.157 "subsystem": "keyring", 00:29:35.157 "config": [ 00:29:35.157 { 00:29:35.157 "method": "keyring_file_add_key", 00:29:35.157 "params": { 00:29:35.157 "name": "key0", 00:29:35.157 "path": "/tmp/tmp.Qdhfn96uAt" 00:29:35.157 } 00:29:35.157 } 00:29:35.157 ] 00:29:35.157 }, 00:29:35.157 { 00:29:35.157 "subsystem": "iobuf", 00:29:35.157 "config": [ 00:29:35.157 { 00:29:35.157 "method": "iobuf_set_options", 00:29:35.157 "params": { 00:29:35.157 "small_pool_count": 8192, 00:29:35.157 "large_pool_count": 1024, 00:29:35.157 "small_bufsize": 8192, 00:29:35.157 "large_bufsize": 135168, 00:29:35.157 "enable_numa": false 00:29:35.157 } 00:29:35.157 } 00:29:35.157 ] 00:29:35.157 }, 00:29:35.157 { 00:29:35.157 "subsystem": "sock", 00:29:35.157 "config": [ 00:29:35.157 { 00:29:35.157 "method": "sock_set_default_impl", 00:29:35.157 "params": { 00:29:35.157 "impl_name": "posix" 00:29:35.157 } 00:29:35.157 }, 00:29:35.157 { 00:29:35.157 "method": "sock_impl_set_options", 00:29:35.158 "params": { 00:29:35.158 "impl_name": "ssl", 00:29:35.158 "recv_buf_size": 4096, 00:29:35.158 "send_buf_size": 4096, 00:29:35.158 "enable_recv_pipe": true, 00:29:35.158 "enable_quickack": false, 00:29:35.158 "enable_placement_id": 0, 00:29:35.158 "enable_zerocopy_send_server": true, 00:29:35.158 "enable_zerocopy_send_client": false, 00:29:35.158 "zerocopy_threshold": 0, 00:29:35.158 "tls_version": 0, 00:29:35.158 "enable_ktls": false 00:29:35.158 } 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "method": "sock_impl_set_options", 00:29:35.158 "params": { 00:29:35.158 "impl_name": "posix", 00:29:35.158 "recv_buf_size": 2097152, 00:29:35.158 "send_buf_size": 2097152, 00:29:35.158 "enable_recv_pipe": true, 00:29:35.158 "enable_quickack": false, 00:29:35.158 "enable_placement_id": 0, 00:29:35.158 "enable_zerocopy_send_server": true, 00:29:35.158 "enable_zerocopy_send_client": false, 00:29:35.158 "zerocopy_threshold": 0, 00:29:35.158 "tls_version": 0, 00:29:35.158 "enable_ktls": false 00:29:35.158 } 00:29:35.158 } 00:29:35.158 ] 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "subsystem": "vmd", 00:29:35.158 "config": [] 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "subsystem": "accel", 00:29:35.158 "config": [ 00:29:35.158 { 00:29:35.158 "method": "accel_set_options", 00:29:35.158 "params": { 00:29:35.158 "small_cache_size": 128, 00:29:35.158 "large_cache_size": 16, 00:29:35.158 "task_count": 2048, 00:29:35.158 "sequence_count": 2048, 00:29:35.158 "buf_count": 2048 00:29:35.158 } 00:29:35.158 } 00:29:35.158 ] 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "subsystem": "bdev", 00:29:35.158 "config": [ 00:29:35.158 { 00:29:35.158 "method": "bdev_set_options", 00:29:35.158 "params": { 00:29:35.158 "bdev_io_pool_size": 65535, 00:29:35.158 "bdev_io_cache_size": 256, 00:29:35.158 "bdev_auto_examine": true, 00:29:35.158 "iobuf_small_cache_size": 128, 00:29:35.158 "iobuf_large_cache_size": 16 00:29:35.158 } 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "method": "bdev_raid_set_options", 00:29:35.158 "params": { 00:29:35.158 
"process_window_size_kb": 1024, 00:29:35.158 "process_max_bandwidth_mb_sec": 0 00:29:35.158 } 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "method": "bdev_iscsi_set_options", 00:29:35.158 "params": { 00:29:35.158 "timeout_sec": 30 00:29:35.158 } 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "method": "bdev_nvme_set_options", 00:29:35.158 "params": { 00:29:35.158 "action_on_timeout": "none", 00:29:35.158 "timeout_us": 0, 00:29:35.158 "timeout_admin_us": 0, 00:29:35.158 "keep_alive_timeout_ms": 10000, 00:29:35.158 "arbitration_burst": 0, 00:29:35.158 "low_priority_weight": 0, 00:29:35.158 "medium_priority_weight": 0, 00:29:35.158 "high_priority_weight": 0, 00:29:35.158 "nvme_adminq_poll_period_us": 10000, 00:29:35.158 "nvme_ioq_poll_period_us": 0, 00:29:35.158 "io_queue_requests": 512, 00:29:35.158 "delay_cmd_submit": true, 00:29:35.158 "transport_retry_count": 4, 00:29:35.158 "bdev_retry_count": 3, 00:29:35.158 "transport_ack_timeout": 0, 00:29:35.158 "ctrlr_loss_timeout_sec": 0, 00:29:35.158 "reconnect_delay_sec": 0, 00:29:35.158 "fast_io_fail_timeout_sec": 0, 00:29:35.158 "disable_auto_failback": false, 00:29:35.158 "generate_uuids": false, 00:29:35.158 "transport_tos": 0, 00:29:35.158 "nvme_error_stat": false, 00:29:35.158 "rdma_srq_size": 0, 00:29:35.158 "io_path_stat": false, 00:29:35.158 "allow_accel_sequence": false, 00:29:35.158 "rdma_max_cq_size": 0, 00:29:35.158 "rdma_cm_event_timeout_ms": 0, 00:29:35.158 "dhchap_digests": [ 00:29:35.158 "sha256", 00:29:35.158 "sha384", 00:29:35.158 "sha512" 00:29:35.158 ], 00:29:35.158 "dhchap_dhgroups": [ 00:29:35.158 "null", 00:29:35.158 "ffdhe2048", 00:29:35.158 "ffdhe3072", 00:29:35.158 "ffdhe4096", 00:29:35.158 "ffdhe6144", 00:29:35.158 "ffdhe8192" 00:29:35.158 ] 00:29:35.158 } 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "method": "bdev_nvme_attach_controller", 00:29:35.158 "params": { 00:29:35.158 "name": "TLSTEST", 00:29:35.158 "trtype": "TCP", 00:29:35.158 "adrfam": "IPv4", 00:29:35.158 "traddr": "10.0.0.2", 00:29:35.158 "trsvcid": "4420", 00:29:35.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.158 "prchk_reftag": false, 00:29:35.158 "prchk_guard": false, 00:29:35.158 "ctrlr_loss_timeout_sec": 0, 00:29:35.158 "reconnect_delay_sec": 0, 00:29:35.158 "fast_io_fail_timeout_sec": 0, 00:29:35.158 "psk": "key0", 00:29:35.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.158 "hdgst": false, 00:29:35.158 "ddgst": false, 00:29:35.158 "multipath": "multipath" 00:29:35.158 } 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "method": "bdev_nvme_set_hotplug", 00:29:35.158 "params": { 00:29:35.158 "period_us": 100000, 00:29:35.158 "enable": false 00:29:35.158 } 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "method": "bdev_wait_for_examine" 00:29:35.158 } 00:29:35.158 ] 00:29:35.158 }, 00:29:35.158 { 00:29:35.158 "subsystem": "nbd", 00:29:35.158 "config": [] 00:29:35.158 } 00:29:35.158 ] 00:29:35.158 }' 00:29:35.158 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:35.158 [2024-12-09 10:40:19.596770] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:29:35.158 [2024-12-09 10:40:19.596946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144884 ] 00:29:35.158 [2024-12-09 10:40:19.810076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.415 [2024-12-09 10:40:19.963847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.676 [2024-12-09 10:40:20.246092] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:35.935 10:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.935 10:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:35.935 10:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:36.194 Running I/O for 10 seconds... 00:29:38.516 1474.00 IOPS, 5.76 MiB/s [2024-12-09T09:40:24.112Z] 1481.50 IOPS, 5.79 MiB/s [2024-12-09T09:40:25.054Z] 1487.33 IOPS, 5.81 MiB/s [2024-12-09T09:40:25.991Z] 1491.50 IOPS, 5.83 MiB/s [2024-12-09T09:40:26.928Z] 1485.00 IOPS, 5.80 MiB/s [2024-12-09T09:40:27.866Z] 1484.50 IOPS, 5.80 MiB/s [2024-12-09T09:40:28.804Z] 1484.43 IOPS, 5.80 MiB/s [2024-12-09T09:40:30.252Z] 1554.38 IOPS, 6.07 MiB/s [2024-12-09T09:40:30.823Z] 1646.89 IOPS, 6.43 MiB/s [2024-12-09T09:40:31.082Z] 1628.80 IOPS, 6.36 MiB/s 00:29:46.428 Latency(us) 00:29:46.428 [2024-12-09T09:40:31.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.428 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:46.428 Verification LBA range: start 0x0 length 0x2000 00:29:46.428 TLSTESTn1 : 10.05 1633.80 6.38 0.00 0.00 78125.18 13883.92 63302.92 00:29:46.428 [2024-12-09T09:40:31.082Z] =================================================================================================================== 00:29:46.428 [2024-12-09T09:40:31.082Z] Total : 1633.80 6.38 0.00 0.00 78125.18 13883.92 63302.92 00:29:46.428 { 00:29:46.428 "results": [ 00:29:46.428 { 00:29:46.428 "job": "TLSTESTn1", 00:29:46.428 "core_mask": "0x4", 00:29:46.428 "workload": "verify", 00:29:46.428 "status": "finished", 00:29:46.428 "verify_range": { 00:29:46.428 "start": 0, 00:29:46.428 "length": 8192 00:29:46.428 }, 00:29:46.428 "queue_depth": 128, 00:29:46.428 "io_size": 4096, 00:29:46.428 "runtime": 10.04776, 00:29:46.428 "iops": 1633.7969855967897, 00:29:46.428 "mibps": 6.38201947498746, 00:29:46.428 "io_failed": 0, 00:29:46.428 "io_timeout": 0, 00:29:46.428 "avg_latency_us": 78125.17768825355, 00:29:46.428 "min_latency_us": 13883.922962962963, 00:29:46.428 "max_latency_us": 63302.921481481484 00:29:46.428 } 00:29:46.428 ], 00:29:46.428 "core_count": 1 00:29:46.428 } 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2144884 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2144884 ']' 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2144884 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144884 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144884' 00:29:46.428 killing process with pid 2144884 00:29:46.428 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2144884 00:29:46.428 Received shutdown signal, test time was about 10.000000 seconds 00:29:46.428 00:29:46.428 Latency(us) 00:29:46.428 [2024-12-09T09:40:31.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.428 [2024-12-09T09:40:31.082Z] =================================================================================================================== 00:29:46.428 [2024-12-09T09:40:31.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.429 10:40:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2144884 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2144615 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2144615 ']' 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2144615 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144615 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144615' 00:29:46.689 killing process with pid 2144615 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2144615 00:29:46.689 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2144615 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2146210 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2146210 
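[Note: the I/O phase that just completed is driven entirely over bdevperf's own RPC socket. bdevperf is launched idle with -z, the initiator-side key and the TLS-enabled controller are added through /var/tmp/bdevperf.sock, and bdevperf.py triggers the run. Condensed from the records above, with the long workspace paths shortened:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0     # creates bdev TLSTESTn1
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
]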
00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2146210 ']' 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.262 10:40:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:47.262 [2024-12-09 10:40:31.845657] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:47.262 [2024-12-09 10:40:31.845782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.524 [2024-12-09 10:40:31.992765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.524 [2024-12-09 10:40:32.110171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.524 [2024-12-09 10:40:32.110288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.524 [2024-12-09 10:40:32.110324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.524 [2024-12-09 10:40:32.110362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.524 [2024-12-09 10:40:32.110374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
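NOTE: with the fresh target (pid 2146210) up, the setup_nvmf_tgt helper traced just below configures TLS in a short sequence of RPCs. Condensed into a standalone sketch (the NQNs, the 10.0.0.2:4420 listener and the /tmp/tmp.Qdhfn96uAt PSK file are taken straight from this log; the full rpc.py path is shortened here for readability):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.Qdhfn96uAt

    $rpc nvmf_create_transport -t tcp -o        # -o turns off the C2H success optimization
                                                # (the config dumped later shows c2h_success: false)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -k          # -k requests a TLS-enabled listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"       # register the PSK file under the name key0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk key0   # only host1, presenting this PSK, may connect

The client side mirrors this: the same key file is registered on the bdevperf RPC socket and bdev_nvme_attach_controller is called with --psk key0, which is what produces the "TLS support is considered experimental" notices in the trace.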
00:29:47.524 [2024-12-09 10:40:32.111176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Qdhfn96uAt 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qdhfn96uAt 00:29:48.096 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:48.356 [2024-12-09 10:40:32.927598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.356 10:40:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:48.927 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:49.499 [2024-12-09 10:40:34.047245] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:49.499 [2024-12-09 10:40:34.047664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.499 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:50.439 malloc0 00:29:50.439 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:50.698 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:50.957 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2146718 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2146718 /var/tmp/bdevperf.sock 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2146718 ']' 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:51.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.216 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:51.475 [2024-12-09 10:40:35.923749] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:51.475 [2024-12-09 10:40:35.923862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146718 ] 00:29:51.475 [2024-12-09 10:40:36.051282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.735 [2024-12-09 10:40:36.168878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.735 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.735 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:51.735 10:40:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:52.674 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:29:53.245 [2024-12-09 10:40:37.718659] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:53.245 nvme0n1 00:29:53.245 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:53.506 Running I/O for 1 seconds... 
00:29:54.702 1440.00 IOPS, 5.62 MiB/s 00:29:54.702 Latency(us) 00:29:54.702 [2024-12-09T09:40:39.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.702 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:54.702 Verification LBA range: start 0x0 length 0x2000 00:29:54.702 nvme0n1 : 1.04 1512.74 5.91 0.00 0.00 83177.00 8738.13 59419.31 00:29:54.702 [2024-12-09T09:40:39.356Z] =================================================================================================================== 00:29:54.702 [2024-12-09T09:40:39.356Z] Total : 1512.74 5.91 0.00 0.00 83177.00 8738.13 59419.31 00:29:54.702 { 00:29:54.702 "results": [ 00:29:54.702 { 00:29:54.702 "job": "nvme0n1", 00:29:54.702 "core_mask": "0x2", 00:29:54.702 "workload": "verify", 00:29:54.702 "status": "finished", 00:29:54.702 "verify_range": { 00:29:54.702 "start": 0, 00:29:54.702 "length": 8192 00:29:54.702 }, 00:29:54.702 "queue_depth": 128, 00:29:54.702 "io_size": 4096, 00:29:54.702 "runtime": 1.036528, 00:29:54.702 "iops": 1512.742540481299, 00:29:54.702 "mibps": 5.909150548755075, 00:29:54.702 "io_failed": 0, 00:29:54.702 "io_timeout": 0, 00:29:54.702 "avg_latency_us": 83177.00450491309, 00:29:54.702 "min_latency_us": 8738.133333333333, 00:29:54.702 "max_latency_us": 59419.306666666664 00:29:54.702 } 00:29:54.702 ], 00:29:54.702 "core_count": 1 00:29:54.702 } 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2146718 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2146718 ']' 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2146718 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2146718 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2146718' 00:29:54.702 killing process with pid 2146718 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2146718 00:29:54.702 Received shutdown signal, test time was about 1.000000 seconds 00:29:54.702 00:29:54.702 Latency(us) 00:29:54.702 [2024-12-09T09:40:39.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.702 [2024-12-09T09:40:39.356Z] =================================================================================================================== 00:29:54.702 [2024-12-09T09:40:39.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:54.702 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2146718 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2146210 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2146210 ']' 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2146210 00:29:54.960 10:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2146210 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2146210' 00:29:54.960 killing process with pid 2146210 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2146210 00:29:54.960 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2146210 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2147160 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2147160 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2147160 ']' 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.528 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:55.528 [2024-12-09 10:40:40.031073] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:29:55.528 [2024-12-09 10:40:40.031178] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.528 [2024-12-09 10:40:40.171607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.789 [2024-12-09 10:40:40.294661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.789 [2024-12-09 10:40:40.294807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:55.789 [2024-12-09 10:40:40.294854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.789 [2024-12-09 10:40:40.294887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:55.789 [2024-12-09 10:40:40.294899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:55.789 [2024-12-09 10:40:40.295667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:56.050 [2024-12-09 10:40:40.558842] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.050 malloc0 00:29:56.050 [2024-12-09 10:40:40.599393] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:56.050 [2024-12-09 10:40:40.599800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2147303 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2147303 /var/tmp/bdevperf.sock 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2147303 ']' 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:56.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.050 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:56.311 [2024-12-09 10:40:40.718059] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:29:56.311 [2024-12-09 10:40:40.718237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147303 ] 00:29:56.311 [2024-12-09 10:40:40.892153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.571 [2024-12-09 10:40:41.013172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.831 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.831 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:29:56.831 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qdhfn96uAt 00:29:57.402 10:40:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:29:57.972 [2024-12-09 10:40:42.345243] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:57.972 nvme0n1 00:29:57.972 10:40:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:58.233 Running I/O for 1 seconds... 00:29:59.171 2226.00 IOPS, 8.70 MiB/s 00:29:59.171 Latency(us) 00:29:59.171 [2024-12-09T09:40:43.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.171 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:59.171 Verification LBA range: start 0x0 length 0x2000 00:29:59.171 nvme0n1 : 1.02 2303.99 9.00 0.00 0.00 54889.34 7767.23 60584.39 00:29:59.171 [2024-12-09T09:40:43.825Z] =================================================================================================================== 00:29:59.171 [2024-12-09T09:40:43.825Z] Total : 2303.99 9.00 0.00 0.00 54889.34 7767.23 60584.39 00:29:59.171 { 00:29:59.171 "results": [ 00:29:59.171 { 00:29:59.171 "job": "nvme0n1", 00:29:59.171 "core_mask": "0x2", 00:29:59.171 "workload": "verify", 00:29:59.171 "status": "finished", 00:29:59.171 "verify_range": { 00:29:59.171 "start": 0, 00:29:59.171 "length": 8192 00:29:59.171 }, 00:29:59.171 "queue_depth": 128, 00:29:59.171 "io_size": 4096, 00:29:59.171 "runtime": 1.021707, 00:29:59.171 "iops": 2303.987346665923, 00:29:59.171 "mibps": 8.999950572913761, 00:29:59.171 "io_failed": 0, 00:29:59.171 "io_timeout": 0, 00:29:59.171 "avg_latency_us": 54889.34385097076, 00:29:59.171 "min_latency_us": 7767.22962962963, 00:29:59.171 "max_latency_us": 60584.39111111111 00:29:59.171 } 00:29:59.171 ], 00:29:59.171 "core_count": 1 00:29:59.171 } 00:29:59.171 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:29:59.171 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.171 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:59.171 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.171 10:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:29:59.171 "subsystems": [ 00:29:59.171 { 00:29:59.171 "subsystem": "keyring", 00:29:59.171 "config": [ 00:29:59.171 { 00:29:59.171 "method": "keyring_file_add_key", 00:29:59.171 "params": { 00:29:59.171 "name": "key0", 00:29:59.171 "path": "/tmp/tmp.Qdhfn96uAt" 00:29:59.171 } 00:29:59.171 } 00:29:59.171 ] 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "subsystem": "iobuf", 00:29:59.171 "config": [ 00:29:59.171 { 00:29:59.171 "method": "iobuf_set_options", 00:29:59.171 "params": { 00:29:59.171 "small_pool_count": 8192, 00:29:59.171 "large_pool_count": 1024, 00:29:59.171 "small_bufsize": 8192, 00:29:59.171 "large_bufsize": 135168, 00:29:59.171 "enable_numa": false 00:29:59.171 } 00:29:59.171 } 00:29:59.171 ] 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "subsystem": "sock", 00:29:59.171 "config": [ 00:29:59.171 { 00:29:59.171 "method": "sock_set_default_impl", 00:29:59.171 "params": { 00:29:59.171 "impl_name": "posix" 00:29:59.171 } 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "method": "sock_impl_set_options", 00:29:59.171 "params": { 00:29:59.171 "impl_name": "ssl", 00:29:59.171 "recv_buf_size": 4096, 00:29:59.171 "send_buf_size": 4096, 00:29:59.171 "enable_recv_pipe": true, 00:29:59.171 "enable_quickack": false, 00:29:59.171 "enable_placement_id": 0, 00:29:59.171 "enable_zerocopy_send_server": true, 00:29:59.171 "enable_zerocopy_send_client": false, 00:29:59.171 "zerocopy_threshold": 0, 00:29:59.171 "tls_version": 0, 00:29:59.171 "enable_ktls": false 00:29:59.171 } 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "method": "sock_impl_set_options", 00:29:59.171 "params": { 00:29:59.171 "impl_name": "posix", 00:29:59.171 "recv_buf_size": 2097152, 00:29:59.171 "send_buf_size": 2097152, 00:29:59.171 "enable_recv_pipe": true, 00:29:59.171 "enable_quickack": false, 00:29:59.171 "enable_placement_id": 0, 00:29:59.171 "enable_zerocopy_send_server": true, 00:29:59.171 "enable_zerocopy_send_client": false, 00:29:59.171 "zerocopy_threshold": 0, 00:29:59.171 "tls_version": 0, 00:29:59.171 "enable_ktls": false 00:29:59.171 } 00:29:59.171 } 00:29:59.171 ] 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "subsystem": "vmd", 00:29:59.171 "config": [] 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "subsystem": "accel", 00:29:59.171 "config": [ 00:29:59.171 { 00:29:59.171 "method": "accel_set_options", 00:29:59.171 "params": { 00:29:59.171 "small_cache_size": 128, 00:29:59.171 "large_cache_size": 16, 00:29:59.171 "task_count": 2048, 00:29:59.171 "sequence_count": 2048, 00:29:59.171 "buf_count": 2048 00:29:59.171 } 00:29:59.171 } 00:29:59.171 ] 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "subsystem": "bdev", 00:29:59.171 "config": [ 00:29:59.171 { 00:29:59.171 "method": "bdev_set_options", 00:29:59.171 "params": { 00:29:59.171 "bdev_io_pool_size": 65535, 00:29:59.171 "bdev_io_cache_size": 256, 00:29:59.171 "bdev_auto_examine": true, 00:29:59.171 "iobuf_small_cache_size": 128, 00:29:59.171 "iobuf_large_cache_size": 16 00:29:59.171 } 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "method": "bdev_raid_set_options", 00:29:59.171 "params": { 00:29:59.171 "process_window_size_kb": 1024, 00:29:59.171 "process_max_bandwidth_mb_sec": 0 00:29:59.171 } 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "method": "bdev_iscsi_set_options", 00:29:59.171 "params": { 00:29:59.171 "timeout_sec": 30 00:29:59.171 } 00:29:59.171 }, 00:29:59.171 { 00:29:59.171 "method": "bdev_nvme_set_options", 00:29:59.171 "params": { 00:29:59.171 "action_on_timeout": "none", 00:29:59.171 
"timeout_us": 0, 00:29:59.171 "timeout_admin_us": 0, 00:29:59.171 "keep_alive_timeout_ms": 10000, 00:29:59.171 "arbitration_burst": 0, 00:29:59.171 "low_priority_weight": 0, 00:29:59.171 "medium_priority_weight": 0, 00:29:59.171 "high_priority_weight": 0, 00:29:59.171 "nvme_adminq_poll_period_us": 10000, 00:29:59.171 "nvme_ioq_poll_period_us": 0, 00:29:59.171 "io_queue_requests": 0, 00:29:59.171 "delay_cmd_submit": true, 00:29:59.171 "transport_retry_count": 4, 00:29:59.172 "bdev_retry_count": 3, 00:29:59.172 "transport_ack_timeout": 0, 00:29:59.172 "ctrlr_loss_timeout_sec": 0, 00:29:59.172 "reconnect_delay_sec": 0, 00:29:59.172 "fast_io_fail_timeout_sec": 0, 00:29:59.172 "disable_auto_failback": false, 00:29:59.172 "generate_uuids": false, 00:29:59.172 "transport_tos": 0, 00:29:59.172 "nvme_error_stat": false, 00:29:59.172 "rdma_srq_size": 0, 00:29:59.172 "io_path_stat": false, 00:29:59.172 "allow_accel_sequence": false, 00:29:59.172 "rdma_max_cq_size": 0, 00:29:59.172 "rdma_cm_event_timeout_ms": 0, 00:29:59.172 "dhchap_digests": [ 00:29:59.172 "sha256", 00:29:59.172 "sha384", 00:29:59.172 "sha512" 00:29:59.172 ], 00:29:59.172 "dhchap_dhgroups": [ 00:29:59.172 "null", 00:29:59.172 "ffdhe2048", 00:29:59.172 "ffdhe3072", 00:29:59.172 "ffdhe4096", 00:29:59.172 "ffdhe6144", 00:29:59.172 "ffdhe8192" 00:29:59.172 ] 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "bdev_nvme_set_hotplug", 00:29:59.172 "params": { 00:29:59.172 "period_us": 100000, 00:29:59.172 "enable": false 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "bdev_malloc_create", 00:29:59.172 "params": { 00:29:59.172 "name": "malloc0", 00:29:59.172 "num_blocks": 8192, 00:29:59.172 "block_size": 4096, 00:29:59.172 "physical_block_size": 4096, 00:29:59.172 "uuid": "8ef760a7-01b2-4f05-a092-3f0611038ab1", 00:29:59.172 "optimal_io_boundary": 0, 00:29:59.172 "md_size": 0, 00:29:59.172 "dif_type": 0, 00:29:59.172 "dif_is_head_of_md": false, 00:29:59.172 "dif_pi_format": 0 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "bdev_wait_for_examine" 00:29:59.172 } 00:29:59.172 ] 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "subsystem": "nbd", 00:29:59.172 "config": [] 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "subsystem": "scheduler", 00:29:59.172 "config": [ 00:29:59.172 { 00:29:59.172 "method": "framework_set_scheduler", 00:29:59.172 "params": { 00:29:59.172 "name": "static" 00:29:59.172 } 00:29:59.172 } 00:29:59.172 ] 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "subsystem": "nvmf", 00:29:59.172 "config": [ 00:29:59.172 { 00:29:59.172 "method": "nvmf_set_config", 00:29:59.172 "params": { 00:29:59.172 "discovery_filter": "match_any", 00:29:59.172 "admin_cmd_passthru": { 00:29:59.172 "identify_ctrlr": false 00:29:59.172 }, 00:29:59.172 "dhchap_digests": [ 00:29:59.172 "sha256", 00:29:59.172 "sha384", 00:29:59.172 "sha512" 00:29:59.172 ], 00:29:59.172 "dhchap_dhgroups": [ 00:29:59.172 "null", 00:29:59.172 "ffdhe2048", 00:29:59.172 "ffdhe3072", 00:29:59.172 "ffdhe4096", 00:29:59.172 "ffdhe6144", 00:29:59.172 "ffdhe8192" 00:29:59.172 ] 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "nvmf_set_max_subsystems", 00:29:59.172 "params": { 00:29:59.172 "max_subsystems": 1024 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "nvmf_set_crdt", 00:29:59.172 "params": { 00:29:59.172 "crdt1": 0, 00:29:59.172 "crdt2": 0, 00:29:59.172 "crdt3": 0 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "nvmf_create_transport", 00:29:59.172 "params": 
{ 00:29:59.172 "trtype": "TCP", 00:29:59.172 "max_queue_depth": 128, 00:29:59.172 "max_io_qpairs_per_ctrlr": 127, 00:29:59.172 "in_capsule_data_size": 4096, 00:29:59.172 "max_io_size": 131072, 00:29:59.172 "io_unit_size": 131072, 00:29:59.172 "max_aq_depth": 128, 00:29:59.172 "num_shared_buffers": 511, 00:29:59.172 "buf_cache_size": 4294967295, 00:29:59.172 "dif_insert_or_strip": false, 00:29:59.172 "zcopy": false, 00:29:59.172 "c2h_success": false, 00:29:59.172 "sock_priority": 0, 00:29:59.172 "abort_timeout_sec": 1, 00:29:59.172 "ack_timeout": 0, 00:29:59.172 "data_wr_pool_size": 0 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "nvmf_create_subsystem", 00:29:59.172 "params": { 00:29:59.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.172 "allow_any_host": false, 00:29:59.172 "serial_number": "00000000000000000000", 00:29:59.172 "model_number": "SPDK bdev Controller", 00:29:59.172 "max_namespaces": 32, 00:29:59.172 "min_cntlid": 1, 00:29:59.172 "max_cntlid": 65519, 00:29:59.172 "ana_reporting": false 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "nvmf_subsystem_add_host", 00:29:59.172 "params": { 00:29:59.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.172 "host": "nqn.2016-06.io.spdk:host1", 00:29:59.172 "psk": "key0" 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "nvmf_subsystem_add_ns", 00:29:59.172 "params": { 00:29:59.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.172 "namespace": { 00:29:59.172 "nsid": 1, 00:29:59.172 "bdev_name": "malloc0", 00:29:59.172 "nguid": "8EF760A701B24F05A0923F0611038AB1", 00:29:59.172 "uuid": "8ef760a7-01b2-4f05-a092-3f0611038ab1", 00:29:59.172 "no_auto_visible": false 00:29:59.172 } 00:29:59.172 } 00:29:59.172 }, 00:29:59.172 { 00:29:59.172 "method": "nvmf_subsystem_add_listener", 00:29:59.172 "params": { 00:29:59.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.172 "listen_address": { 00:29:59.172 "trtype": "TCP", 00:29:59.172 "adrfam": "IPv4", 00:29:59.172 "traddr": "10.0.0.2", 00:29:59.172 "trsvcid": "4420" 00:29:59.172 }, 00:29:59.172 "secure_channel": false, 00:29:59.172 "sock_impl": "ssl" 00:29:59.172 } 00:29:59.172 } 00:29:59.172 ] 00:29:59.172 } 00:29:59.172 ] 00:29:59.172 }' 00:29:59.172 10:40:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:29:59.738 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:29:59.738 "subsystems": [ 00:29:59.738 { 00:29:59.738 "subsystem": "keyring", 00:29:59.738 "config": [ 00:29:59.738 { 00:29:59.738 "method": "keyring_file_add_key", 00:29:59.738 "params": { 00:29:59.738 "name": "key0", 00:29:59.738 "path": "/tmp/tmp.Qdhfn96uAt" 00:29:59.738 } 00:29:59.738 } 00:29:59.738 ] 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "subsystem": "iobuf", 00:29:59.738 "config": [ 00:29:59.738 { 00:29:59.738 "method": "iobuf_set_options", 00:29:59.738 "params": { 00:29:59.738 "small_pool_count": 8192, 00:29:59.738 "large_pool_count": 1024, 00:29:59.738 "small_bufsize": 8192, 00:29:59.738 "large_bufsize": 135168, 00:29:59.738 "enable_numa": false 00:29:59.738 } 00:29:59.738 } 00:29:59.738 ] 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "subsystem": "sock", 00:29:59.738 "config": [ 00:29:59.738 { 00:29:59.738 "method": "sock_set_default_impl", 00:29:59.738 "params": { 00:29:59.738 "impl_name": "posix" 00:29:59.738 } 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "method": "sock_impl_set_options", 00:29:59.738 
"params": { 00:29:59.738 "impl_name": "ssl", 00:29:59.738 "recv_buf_size": 4096, 00:29:59.738 "send_buf_size": 4096, 00:29:59.738 "enable_recv_pipe": true, 00:29:59.738 "enable_quickack": false, 00:29:59.738 "enable_placement_id": 0, 00:29:59.738 "enable_zerocopy_send_server": true, 00:29:59.738 "enable_zerocopy_send_client": false, 00:29:59.738 "zerocopy_threshold": 0, 00:29:59.738 "tls_version": 0, 00:29:59.738 "enable_ktls": false 00:29:59.738 } 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "method": "sock_impl_set_options", 00:29:59.738 "params": { 00:29:59.738 "impl_name": "posix", 00:29:59.738 "recv_buf_size": 2097152, 00:29:59.738 "send_buf_size": 2097152, 00:29:59.738 "enable_recv_pipe": true, 00:29:59.738 "enable_quickack": false, 00:29:59.738 "enable_placement_id": 0, 00:29:59.738 "enable_zerocopy_send_server": true, 00:29:59.738 "enable_zerocopy_send_client": false, 00:29:59.738 "zerocopy_threshold": 0, 00:29:59.738 "tls_version": 0, 00:29:59.738 "enable_ktls": false 00:29:59.738 } 00:29:59.738 } 00:29:59.738 ] 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "subsystem": "vmd", 00:29:59.738 "config": [] 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "subsystem": "accel", 00:29:59.738 "config": [ 00:29:59.738 { 00:29:59.738 "method": "accel_set_options", 00:29:59.738 "params": { 00:29:59.738 "small_cache_size": 128, 00:29:59.738 "large_cache_size": 16, 00:29:59.738 "task_count": 2048, 00:29:59.738 "sequence_count": 2048, 00:29:59.738 "buf_count": 2048 00:29:59.738 } 00:29:59.738 } 00:29:59.738 ] 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "subsystem": "bdev", 00:29:59.738 "config": [ 00:29:59.738 { 00:29:59.738 "method": "bdev_set_options", 00:29:59.738 "params": { 00:29:59.738 "bdev_io_pool_size": 65535, 00:29:59.738 "bdev_io_cache_size": 256, 00:29:59.738 "bdev_auto_examine": true, 00:29:59.738 "iobuf_small_cache_size": 128, 00:29:59.738 "iobuf_large_cache_size": 16 00:29:59.738 } 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "method": "bdev_raid_set_options", 00:29:59.738 "params": { 00:29:59.738 "process_window_size_kb": 1024, 00:29:59.738 "process_max_bandwidth_mb_sec": 0 00:29:59.738 } 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "method": "bdev_iscsi_set_options", 00:29:59.738 "params": { 00:29:59.738 "timeout_sec": 30 00:29:59.738 } 00:29:59.738 }, 00:29:59.738 { 00:29:59.738 "method": "bdev_nvme_set_options", 00:29:59.738 "params": { 00:29:59.738 "action_on_timeout": "none", 00:29:59.738 "timeout_us": 0, 00:29:59.738 "timeout_admin_us": 0, 00:29:59.738 "keep_alive_timeout_ms": 10000, 00:29:59.738 "arbitration_burst": 0, 00:29:59.738 "low_priority_weight": 0, 00:29:59.738 "medium_priority_weight": 0, 00:29:59.738 "high_priority_weight": 0, 00:29:59.738 "nvme_adminq_poll_period_us": 10000, 00:29:59.738 "nvme_ioq_poll_period_us": 0, 00:29:59.738 "io_queue_requests": 512, 00:29:59.738 "delay_cmd_submit": true, 00:29:59.738 "transport_retry_count": 4, 00:29:59.738 "bdev_retry_count": 3, 00:29:59.738 "transport_ack_timeout": 0, 00:29:59.738 "ctrlr_loss_timeout_sec": 0, 00:29:59.738 "reconnect_delay_sec": 0, 00:29:59.738 "fast_io_fail_timeout_sec": 0, 00:29:59.738 "disable_auto_failback": false, 00:29:59.738 "generate_uuids": false, 00:29:59.738 "transport_tos": 0, 00:29:59.738 "nvme_error_stat": false, 00:29:59.738 "rdma_srq_size": 0, 00:29:59.738 "io_path_stat": false, 00:29:59.738 "allow_accel_sequence": false, 00:29:59.738 "rdma_max_cq_size": 0, 00:29:59.738 "rdma_cm_event_timeout_ms": 0, 00:29:59.738 "dhchap_digests": [ 00:29:59.738 "sha256", 00:29:59.738 "sha384", 00:29:59.738 
"sha512" 00:29:59.738 ], 00:29:59.738 "dhchap_dhgroups": [ 00:29:59.738 "null", 00:29:59.738 "ffdhe2048", 00:29:59.738 "ffdhe3072", 00:29:59.738 "ffdhe4096", 00:29:59.738 "ffdhe6144", 00:29:59.738 "ffdhe8192" 00:29:59.738 ] 00:29:59.738 } 00:29:59.738 }, 00:29:59.738 { 00:29:59.739 "method": "bdev_nvme_attach_controller", 00:29:59.739 "params": { 00:29:59.739 "name": "nvme0", 00:29:59.739 "trtype": "TCP", 00:29:59.739 "adrfam": "IPv4", 00:29:59.739 "traddr": "10.0.0.2", 00:29:59.739 "trsvcid": "4420", 00:29:59.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.739 "prchk_reftag": false, 00:29:59.739 "prchk_guard": false, 00:29:59.739 "ctrlr_loss_timeout_sec": 0, 00:29:59.739 "reconnect_delay_sec": 0, 00:29:59.739 "fast_io_fail_timeout_sec": 0, 00:29:59.739 "psk": "key0", 00:29:59.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:59.739 "hdgst": false, 00:29:59.739 "ddgst": false, 00:29:59.739 "multipath": "multipath" 00:29:59.739 } 00:29:59.739 }, 00:29:59.739 { 00:29:59.739 "method": "bdev_nvme_set_hotplug", 00:29:59.739 "params": { 00:29:59.739 "period_us": 100000, 00:29:59.739 "enable": false 00:29:59.739 } 00:29:59.739 }, 00:29:59.739 { 00:29:59.739 "method": "bdev_enable_histogram", 00:29:59.739 "params": { 00:29:59.739 "name": "nvme0n1", 00:29:59.739 "enable": true 00:29:59.739 } 00:29:59.739 }, 00:29:59.739 { 00:29:59.739 "method": "bdev_wait_for_examine" 00:29:59.739 } 00:29:59.739 ] 00:29:59.739 }, 00:29:59.739 { 00:29:59.739 "subsystem": "nbd", 00:29:59.739 "config": [] 00:29:59.739 } 00:29:59.739 ] 00:29:59.739 }' 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2147303 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2147303 ']' 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2147303 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147303 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147303' 00:29:59.739 killing process with pid 2147303 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2147303 00:29:59.739 Received shutdown signal, test time was about 1.000000 seconds 00:29:59.739 00:29:59.739 Latency(us) 00:29:59.739 [2024-12-09T09:40:44.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.739 [2024-12-09T09:40:44.393Z] =================================================================================================================== 00:29:59.739 [2024-12-09T09:40:44.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.739 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2147303 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2147160 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2147160 
']' 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2147160 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147160 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147160' 00:30:00.322 killing process with pid 2147160 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2147160 00:30:00.322 10:40:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2147160 00:30:00.582 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:30:00.582 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.582 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.582 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:30:00.582 "subsystems": [ 00:30:00.582 { 00:30:00.582 "subsystem": "keyring", 00:30:00.582 "config": [ 00:30:00.582 { 00:30:00.582 "method": "keyring_file_add_key", 00:30:00.582 "params": { 00:30:00.582 "name": "key0", 00:30:00.582 "path": "/tmp/tmp.Qdhfn96uAt" 00:30:00.582 } 00:30:00.582 } 00:30:00.582 ] 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "subsystem": "iobuf", 00:30:00.582 "config": [ 00:30:00.582 { 00:30:00.582 "method": "iobuf_set_options", 00:30:00.582 "params": { 00:30:00.582 "small_pool_count": 8192, 00:30:00.582 "large_pool_count": 1024, 00:30:00.582 "small_bufsize": 8192, 00:30:00.582 "large_bufsize": 135168, 00:30:00.582 "enable_numa": false 00:30:00.582 } 00:30:00.582 } 00:30:00.582 ] 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "subsystem": "sock", 00:30:00.582 "config": [ 00:30:00.582 { 00:30:00.582 "method": "sock_set_default_impl", 00:30:00.582 "params": { 00:30:00.582 "impl_name": "posix" 00:30:00.582 } 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "method": "sock_impl_set_options", 00:30:00.582 "params": { 00:30:00.582 "impl_name": "ssl", 00:30:00.582 "recv_buf_size": 4096, 00:30:00.582 "send_buf_size": 4096, 00:30:00.582 "enable_recv_pipe": true, 00:30:00.582 "enable_quickack": false, 00:30:00.582 "enable_placement_id": 0, 00:30:00.582 "enable_zerocopy_send_server": true, 00:30:00.582 "enable_zerocopy_send_client": false, 00:30:00.582 "zerocopy_threshold": 0, 00:30:00.582 "tls_version": 0, 00:30:00.582 "enable_ktls": false 00:30:00.582 } 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "method": "sock_impl_set_options", 00:30:00.582 "params": { 00:30:00.582 "impl_name": "posix", 00:30:00.582 "recv_buf_size": 2097152, 00:30:00.582 "send_buf_size": 2097152, 00:30:00.582 "enable_recv_pipe": true, 00:30:00.582 "enable_quickack": false, 00:30:00.582 "enable_placement_id": 0, 00:30:00.582 "enable_zerocopy_send_server": true, 00:30:00.582 "enable_zerocopy_send_client": false, 00:30:00.582 "zerocopy_threshold": 0, 00:30:00.582 "tls_version": 0, 00:30:00.582 "enable_ktls": 
false 00:30:00.582 } 00:30:00.582 } 00:30:00.582 ] 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "subsystem": "vmd", 00:30:00.582 "config": [] 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "subsystem": "accel", 00:30:00.582 "config": [ 00:30:00.582 { 00:30:00.582 "method": "accel_set_options", 00:30:00.582 "params": { 00:30:00.582 "small_cache_size": 128, 00:30:00.582 "large_cache_size": 16, 00:30:00.582 "task_count": 2048, 00:30:00.582 "sequence_count": 2048, 00:30:00.582 "buf_count": 2048 00:30:00.582 } 00:30:00.582 } 00:30:00.582 ] 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "subsystem": "bdev", 00:30:00.582 "config": [ 00:30:00.582 { 00:30:00.582 "method": "bdev_set_options", 00:30:00.582 "params": { 00:30:00.582 "bdev_io_pool_size": 65535, 00:30:00.582 "bdev_io_cache_size": 256, 00:30:00.582 "bdev_auto_examine": true, 00:30:00.582 "iobuf_small_cache_size": 128, 00:30:00.582 "iobuf_large_cache_size": 16 00:30:00.582 } 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "method": "bdev_raid_set_options", 00:30:00.582 "params": { 00:30:00.582 "process_window_size_kb": 1024, 00:30:00.582 "process_max_bandwidth_mb_sec": 0 00:30:00.582 } 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "method": "bdev_iscsi_set_options", 00:30:00.582 "params": { 00:30:00.582 "timeout_sec": 30 00:30:00.582 } 00:30:00.582 }, 00:30:00.582 { 00:30:00.582 "method": "bdev_nvme_set_options", 00:30:00.582 "params": { 00:30:00.582 "action_on_timeout": "none", 00:30:00.582 "timeout_us": 0, 00:30:00.582 "timeout_admin_us": 0, 00:30:00.582 "keep_alive_timeout_ms": 10000, 00:30:00.582 "arbitration_burst": 0, 00:30:00.582 "low_priority_weight": 0, 00:30:00.582 "medium_priority_weight": 0, 00:30:00.582 "high_priority_weight": 0, 00:30:00.583 "nvme_adminq_poll_period_us": 10000, 00:30:00.583 "nvme_ioq_poll_period_us": 0, 00:30:00.583 "io_queue_requests": 0, 00:30:00.583 "delay_cmd_submit": true, 00:30:00.583 "transport_retry_count": 4, 00:30:00.583 "bdev_retry_count": 3, 00:30:00.583 "transport_ack_timeout": 0, 00:30:00.583 "ctrlr_loss_timeout_sec": 0, 00:30:00.583 "reconnect_delay_sec": 0, 00:30:00.583 "fast_io_fail_timeout_sec": 0, 00:30:00.583 "disable_auto_failback": false, 00:30:00.583 "generate_uuids": false, 00:30:00.583 "transport_tos": 0, 00:30:00.583 "nvme_error_stat": false, 00:30:00.583 "rdma_srq_size": 0, 00:30:00.583 "io_path_stat": false, 00:30:00.583 "allow_accel_sequence": false, 00:30:00.583 "rdma_max_cq_size": 0, 00:30:00.583 "rdma_cm_event_timeout_ms": 0, 00:30:00.583 "dhchap_digests": [ 00:30:00.583 "sha256", 00:30:00.583 "sha384", 00:30:00.583 "sha512" 00:30:00.583 ], 00:30:00.583 "dhchap_dhgroups": [ 00:30:00.583 "null", 00:30:00.583 "ffdhe2048", 00:30:00.583 "ffdhe3072", 00:30:00.583 "ffdhe4096", 00:30:00.583 "ffdhe6144", 00:30:00.583 "ffdhe8192" 00:30:00.583 ] 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "bdev_nvme_set_hotplug", 00:30:00.583 "params": { 00:30:00.583 "period_us": 100000, 00:30:00.583 "enable": false 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "bdev_malloc_create", 00:30:00.583 "params": { 00:30:00.583 "name": "malloc0", 00:30:00.583 "num_blocks": 8192, 00:30:00.583 "block_size": 4096, 00:30:00.583 "physical_block_size": 4096, 00:30:00.583 "uuid": "8ef760a7-01b2-4f05-a092-3f0611038ab1", 00:30:00.583 "optimal_io_boundary": 0, 00:30:00.583 "md_size": 0, 00:30:00.583 "dif_type": 0, 00:30:00.583 "dif_is_head_of_md": false, 00:30:00.583 "dif_pi_format": 0 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "bdev_wait_for_examine" 
00:30:00.583 } 00:30:00.583 ] 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "subsystem": "nbd", 00:30:00.583 "config": [] 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "subsystem": "scheduler", 00:30:00.583 "config": [ 00:30:00.583 { 00:30:00.583 "method": "framework_set_scheduler", 00:30:00.583 "params": { 00:30:00.583 "name": "static" 00:30:00.583 } 00:30:00.583 } 00:30:00.583 ] 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "subsystem": "nvmf", 00:30:00.583 "config": [ 00:30:00.583 { 00:30:00.583 "method": "nvmf_set_config", 00:30:00.583 "params": { 00:30:00.583 "discovery_filter": "match_any", 00:30:00.583 "admin_cmd_passthru": { 00:30:00.583 "identify_ctrlr": false 00:30:00.583 }, 00:30:00.583 "dhchap_digests": [ 00:30:00.583 "sha256", 00:30:00.583 "sha384", 00:30:00.583 "sha512" 00:30:00.583 ], 00:30:00.583 "dhchap_dhgroups": [ 00:30:00.583 "null", 00:30:00.583 "ffdhe2048", 00:30:00.583 "ffdhe3072", 00:30:00.583 "ffdhe4096", 00:30:00.583 "ffdhe6144", 00:30:00.583 "ffdhe8192" 00:30:00.583 ] 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "nvmf_set_max_subsystems", 00:30:00.583 "params": { 00:30:00.583 "max_subsystems": 1024 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "nvmf_set_crdt", 00:30:00.583 "params": { 00:30:00.583 "crdt1": 0, 00:30:00.583 "crdt2": 0, 00:30:00.583 "crdt3": 0 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "nvmf_create_transport", 00:30:00.583 "params": { 00:30:00.583 "trtype": "TCP", 00:30:00.583 "max_queue_depth": 128, 00:30:00.583 "max_io_qpairs_per_ctrlr": 127, 00:30:00.583 "in_capsule_data_size": 4096, 00:30:00.583 "max_io_size": 131072, 00:30:00.583 "io_unit_size": 131072, 00:30:00.583 "max_aq_depth": 128, 00:30:00.583 "num_shared_buffers": 511, 00:30:00.583 "buf_cache_size": 4294967295, 00:30:00.583 "dif_insert_or_strip": false, 00:30:00.583 "zcopy": false, 00:30:00.583 "c2h_success": false, 00:30:00.583 "sock_priority": 0, 00:30:00.583 "abort_timeout_sec": 1, 00:30:00.583 "ack_timeout": 0, 00:30:00.583 "data_wr_pool_size": 0 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "nvmf_create_subsystem", 00:30:00.583 "params": { 00:30:00.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.583 "allow_any_host": false, 00:30:00.583 "serial_number": "00000000000000000000", 00:30:00.583 "model_number": "SPDK bdev Controller", 00:30:00.583 "max_namespaces": 32, 00:30:00.583 "min_cntlid": 1, 00:30:00.583 "max_cntlid": 65519, 00:30:00.583 "ana_reporting": false 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "nvmf_subsystem_add_host", 00:30:00.583 "params": { 00:30:00.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.583 "host": "nqn.2016-06.io.spdk:host1", 00:30:00.583 "psk": "key0" 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "nvmf_subsystem_add_ns", 00:30:00.583 "params": { 00:30:00.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.583 "namespace": { 00:30:00.583 "nsid": 1, 00:30:00.583 "bdev_name": "malloc0", 00:30:00.583 "nguid": "8EF760A701B24F05A0923F0611038AB1", 00:30:00.583 "uuid": "8ef760a7-01b2-4f05-a092-3f0611038ab1", 00:30:00.583 "no_auto_visible": false 00:30:00.583 } 00:30:00.583 } 00:30:00.583 }, 00:30:00.583 { 00:30:00.583 "method": "nvmf_subsystem_add_listener", 00:30:00.583 "params": { 00:30:00.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.583 "listen_address": { 00:30:00.583 "trtype": "TCP", 00:30:00.583 "adrfam": "IPv4", 00:30:00.583 "traddr": "10.0.0.2", 00:30:00.583 "trsvcid": "4420" 00:30:00.583 }, 00:30:00.583 
"secure_channel": false, 00:30:00.583 "sock_impl": "ssl" 00:30:00.583 } 00:30:00.583 } 00:30:00.583 ] 00:30:00.583 } 00:30:00.583 ] 00:30:00.583 }' 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2147738 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2147738 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2147738 ']' 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.583 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:00.844 [2024-12-09 10:40:45.277052] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:30:00.844 [2024-12-09 10:40:45.277149] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.844 [2024-12-09 10:40:45.417995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.105 [2024-12-09 10:40:45.534708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.105 [2024-12-09 10:40:45.534832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.105 [2024-12-09 10:40:45.534870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.105 [2024-12-09 10:40:45.534918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.105 [2024-12-09 10:40:45.534932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.105 [2024-12-09 10:40:45.535937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.364 [2024-12-09 10:40:45.873966] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.364 [2024-12-09 10:40:45.906532] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:01.364 [2024-12-09 10:40:45.906922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2147873 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2147873 /var/tmp/bdevperf.sock 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2147873 ']' 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:01.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
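NOTE: this last pass exercises config-driven startup instead of live RPCs. The target above was relaunched with '-c /dev/fd/62', i.e. fed the JSON previously captured by save_config, and the bdevperf instance gets the companion JSON echoed just below via '-c /dev/fd/63'; that config already carries keyring_file_add_key, bdev_nvme_attach_controller with psk key0, and bdev_enable_histogram, so the TLS connection is established during start-up. The /dev/fd descriptors are presumably bash process substitution over the captured variables; a minimal sketch of the pattern, assuming $tgtcfg and $bperfcfg hold the two JSON blobs shown in this log (the ip netns wrapper and full binary paths from the log are omitted here):

    tgtcfg=$(rpc_cmd save_config)                         # snapshot the live target config
    bperfcfg=$(rpc_cmd -s /var/tmp/bdevperf.sock save_config)

    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &        # shows up as /dev/fd/NN in the trace
    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &

    # afterwards the script only has to confirm the controller came up from config
    # and kick off I/O, per the tls.sh@279-280 trace further down:
    name=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == nvme0 ]] || exit 1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests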
00:30:01.364 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:30:01.364 "subsystems": [ 00:30:01.364 { 00:30:01.364 "subsystem": "keyring", 00:30:01.364 "config": [ 00:30:01.364 { 00:30:01.364 "method": "keyring_file_add_key", 00:30:01.364 "params": { 00:30:01.364 "name": "key0", 00:30:01.364 "path": "/tmp/tmp.Qdhfn96uAt" 00:30:01.364 } 00:30:01.364 } 00:30:01.364 ] 00:30:01.364 }, 00:30:01.364 { 00:30:01.364 "subsystem": "iobuf", 00:30:01.364 "config": [ 00:30:01.364 { 00:30:01.364 "method": "iobuf_set_options", 00:30:01.364 "params": { 00:30:01.364 "small_pool_count": 8192, 00:30:01.364 "large_pool_count": 1024, 00:30:01.364 "small_bufsize": 8192, 00:30:01.364 "large_bufsize": 135168, 00:30:01.364 "enable_numa": false 00:30:01.364 } 00:30:01.364 } 00:30:01.364 ] 00:30:01.364 }, 00:30:01.364 { 00:30:01.364 "subsystem": "sock", 00:30:01.364 "config": [ 00:30:01.364 { 00:30:01.364 "method": "sock_set_default_impl", 00:30:01.364 "params": { 00:30:01.364 "impl_name": "posix" 00:30:01.364 } 00:30:01.364 }, 00:30:01.364 { 00:30:01.364 "method": "sock_impl_set_options", 00:30:01.364 "params": { 00:30:01.364 "impl_name": "ssl", 00:30:01.364 "recv_buf_size": 4096, 00:30:01.364 "send_buf_size": 4096, 00:30:01.364 "enable_recv_pipe": true, 00:30:01.364 "enable_quickack": false, 00:30:01.365 "enable_placement_id": 0, 00:30:01.365 "enable_zerocopy_send_server": true, 00:30:01.365 "enable_zerocopy_send_client": false, 00:30:01.365 "zerocopy_threshold": 0, 00:30:01.365 "tls_version": 0, 00:30:01.365 "enable_ktls": false 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "sock_impl_set_options", 00:30:01.365 "params": { 00:30:01.365 "impl_name": "posix", 00:30:01.365 "recv_buf_size": 2097152, 00:30:01.365 "send_buf_size": 2097152, 00:30:01.365 "enable_recv_pipe": true, 00:30:01.365 "enable_quickack": false, 00:30:01.365 "enable_placement_id": 0, 00:30:01.365 "enable_zerocopy_send_server": true, 00:30:01.365 "enable_zerocopy_send_client": false, 00:30:01.365 "zerocopy_threshold": 0, 00:30:01.365 "tls_version": 0, 00:30:01.365 "enable_ktls": false 00:30:01.365 } 00:30:01.365 } 00:30:01.365 ] 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "subsystem": "vmd", 00:30:01.365 "config": [] 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "subsystem": "accel", 00:30:01.365 "config": [ 00:30:01.365 { 00:30:01.365 "method": "accel_set_options", 00:30:01.365 "params": { 00:30:01.365 "small_cache_size": 128, 00:30:01.365 "large_cache_size": 16, 00:30:01.365 "task_count": 2048, 00:30:01.365 "sequence_count": 2048, 00:30:01.365 "buf_count": 2048 00:30:01.365 } 00:30:01.365 } 00:30:01.365 ] 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "subsystem": "bdev", 00:30:01.365 "config": [ 00:30:01.365 { 00:30:01.365 "method": "bdev_set_options", 00:30:01.365 "params": { 00:30:01.365 "bdev_io_pool_size": 65535, 00:30:01.365 "bdev_io_cache_size": 256, 00:30:01.365 "bdev_auto_examine": true, 00:30:01.365 "iobuf_small_cache_size": 128, 00:30:01.365 "iobuf_large_cache_size": 16 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "bdev_raid_set_options", 00:30:01.365 "params": { 00:30:01.365 "process_window_size_kb": 1024, 00:30:01.365 "process_max_bandwidth_mb_sec": 0 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "bdev_iscsi_set_options", 00:30:01.365 "params": { 00:30:01.365 "timeout_sec": 30 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "bdev_nvme_set_options", 00:30:01.365 "params": { 00:30:01.365 "action_on_timeout": "none", 
00:30:01.365 "timeout_us": 0, 00:30:01.365 "timeout_admin_us": 0, 00:30:01.365 "keep_alive_timeout_ms": 10000, 00:30:01.365 "arbitration_burst": 0, 00:30:01.365 "low_priority_weight": 0, 00:30:01.365 "medium_priority_weight": 0, 00:30:01.365 "high_priority_weight": 0, 00:30:01.365 "nvme_adminq_poll_period_us": 10000, 00:30:01.365 "nvme_ioq_poll_period_us": 0, 00:30:01.365 "io_queue_requests": 512, 00:30:01.365 "delay_cmd_submit": true, 00:30:01.365 "transport_retry_count": 4, 00:30:01.365 "bdev_retry_count": 3, 00:30:01.365 "transport_ack_timeout": 0, 00:30:01.365 "ctrlr_loss_timeout_sec": 0, 00:30:01.365 "reconnect_delay_sec": 0, 00:30:01.365 "fast_io_fail_timeout_sec": 0, 00:30:01.365 "disable_auto_failback": false, 00:30:01.365 "generate_uuids": false, 00:30:01.365 "transport_tos": 0, 00:30:01.365 "nvme_error_stat": false, 00:30:01.365 "rdma_srq_size": 0, 00:30:01.365 "io_path_stat": false, 00:30:01.365 "allow_accel_sequence": false, 00:30:01.365 "rdma_max_cq_size": 0, 00:30:01.365 "rdma_cm_event_timeout_ms": 0, 00:30:01.365 "dhchap_digests": [ 00:30:01.365 "sha256", 00:30:01.365 "sha384", 00:30:01.365 "sha512" 00:30:01.365 ], 00:30:01.365 "dhchap_dhgroups": [ 00:30:01.365 "null", 00:30:01.365 "ffdhe2048", 00:30:01.365 "ffdhe3072", 00:30:01.365 "ffdhe4096", 00:30:01.365 "ffdhe6144", 00:30:01.365 "ffdhe8192" 00:30:01.365 ] 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "bdev_nvme_attach_controller", 00:30:01.365 "params": { 00:30:01.365 "name": "nvme0", 00:30:01.365 "trtype": "TCP", 00:30:01.365 "adrfam": "IPv4", 00:30:01.365 "traddr": "10.0.0.2", 00:30:01.365 "trsvcid": "4420", 00:30:01.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.365 "prchk_reftag": false, 00:30:01.365 "prchk_guard": false, 00:30:01.365 "ctrlr_loss_timeout_sec": 0, 00:30:01.365 "reconnect_delay_sec": 0, 00:30:01.365 "fast_io_fail_timeout_sec": 0, 00:30:01.365 "psk": "key0", 00:30:01.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:01.365 "hdgst": false, 00:30:01.365 "ddgst": false, 00:30:01.365 "multipath": "multipath" 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "bdev_nvme_set_hotplug", 00:30:01.365 "params": { 00:30:01.365 "period_us": 100000, 00:30:01.365 "enable": false 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "bdev_enable_histogram", 00:30:01.365 "params": { 00:30:01.365 "name": "nvme0n1", 00:30:01.365 "enable": true 00:30:01.365 } 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "method": "bdev_wait_for_examine" 00:30:01.365 } 00:30:01.365 ] 00:30:01.365 }, 00:30:01.365 { 00:30:01.365 "subsystem": "nbd", 00:30:01.365 "config": [] 00:30:01.365 } 00:30:01.365 ] 00:30:01.365 }' 00:30:01.365 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.365 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.625 [2024-12-09 10:40:46.091746] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
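The configuration echoed into bdevperf above sets up every subsystem, but the TLS-specific plumbing reduces to two calls: keyring_file_add_key registers the PSK file as key0, and bdev_nvme_attach_controller references it as its psk when connecting to nqn.2016-06.io.spdk:cnode1. A stripped-down sketch keeping only the names, paths, and addresses that appear in the trace, with every tuning knob (iobuf, sock, accel, bdev_nvme timeouts) left at its default:

    # Minimal bdevperf config for the TLS-PSK attach shown above (sketch).
    cat <<'EOF' > /tmp/bdevperf_tls.json
    {
      "subsystems": [
        { "subsystem": "keyring", "config": [
          { "method": "keyring_file_add_key",
            "params": { "name": "key0", "path": "/tmp/tmp.Qdhfn96uAt" } } ] },
        { "subsystem": "bdev", "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "psk": "key0" } },
          { "method": "bdev_wait_for_examine" } ] }
      ]
    }
    EOF

The trailing bdev_wait_for_examine, also present in the full config, holds startup until bdev examination completes, so the verify job does not race controller bring-up.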
00:30:01.625 [2024-12-09 10:40:46.091937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147873 ] 00:30:01.625 [2024-12-09 10:40:46.235483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.884 [2024-12-09 10:40:46.340122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.144 [2024-12-09 10:40:46.584525] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:02.144 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.144 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:30:02.144 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:02.144 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:30:03.082 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.082 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.082 Running I/O for 1 seconds... 00:30:04.023 1458.00 IOPS, 5.70 MiB/s 00:30:04.023 Latency(us) 00:30:04.023 [2024-12-09T09:40:48.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.023 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:04.023 Verification LBA range: start 0x0 length 0x2000 00:30:04.023 nvme0n1 : 1.05 1516.60 5.92 0.00 0.00 82631.04 12524.66 61749.48 00:30:04.023 [2024-12-09T09:40:48.677Z] =================================================================================================================== 00:30:04.023 [2024-12-09T09:40:48.677Z] Total : 1516.60 5.92 0.00 0.00 82631.04 12524.66 61749.48 00:30:04.023 { 00:30:04.023 "results": [ 00:30:04.023 { 00:30:04.023 "job": "nvme0n1", 00:30:04.023 "core_mask": "0x2", 00:30:04.023 "workload": "verify", 00:30:04.023 "status": "finished", 00:30:04.023 "verify_range": { 00:30:04.023 "start": 0, 00:30:04.023 "length": 8192 00:30:04.023 }, 00:30:04.023 "queue_depth": 128, 00:30:04.023 "io_size": 4096, 00:30:04.023 "runtime": 1.045758, 00:30:04.023 "iops": 1516.603267677608, 00:30:04.023 "mibps": 5.9242315143656565, 00:30:04.023 "io_failed": 0, 00:30:04.023 "io_timeout": 0, 00:30:04.023 "avg_latency_us": 82631.03698099108, 00:30:04.023 "min_latency_us": 12524.657777777778, 00:30:04.023 "max_latency_us": 61749.47555555555 00:30:04.023 } 00:30:04.023 ], 00:30:04.023 "core_count": 1 00:30:04.023 } 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:04.023 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:04.023 nvmf_trace.0 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2147873 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2147873 ']' 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2147873 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147873 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147873' 00:30:04.283 killing process with pid 2147873 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2147873 00:30:04.283 Received shutdown signal, test time was about 1.000000 seconds 00:30:04.283 00:30:04.283 Latency(us) 00:30:04.283 [2024-12-09T09:40:48.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.283 [2024-12-09T09:40:48.937Z] =================================================================================================================== 00:30:04.283 [2024-12-09T09:40:48.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.283 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2147873 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.544 rmmod nvme_tcp 00:30:04.544 rmmod nvme_fabrics 00:30:04.544 rmmod nvme_keyring 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.544 10:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2147738 ']' 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2147738 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2147738 ']' 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2147738 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.544 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147738 00:30:04.803 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.803 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.803 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147738' 00:30:04.803 killing process with pid 2147738 00:30:04.803 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2147738 00:30:04.803 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2147738 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.064 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.gmrKDoyOIe /tmp/tmp.XHrmc1PpF6 /tmp/tmp.Qdhfn96uAt 00:30:07.604 00:30:07.604 real 1m50.455s 00:30:07.604 user 3m11.593s 00:30:07.604 sys 0m32.243s 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:07.604 ************************************ 00:30:07.604 END TEST nvmf_tls 
00:30:07.604 ************************************ 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:07.604 ************************************ 00:30:07.604 START TEST nvmf_fips 00:30:07.604 ************************************ 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:07.604 * Looking for test storage... 00:30:07.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:30:07.604 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:30:07.604 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:07.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.605 --rc genhtml_branch_coverage=1 00:30:07.605 --rc genhtml_function_coverage=1 00:30:07.605 --rc genhtml_legend=1 00:30:07.605 --rc geninfo_all_blocks=1 00:30:07.605 --rc geninfo_unexecuted_blocks=1 00:30:07.605 00:30:07.605 ' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:07.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.605 --rc genhtml_branch_coverage=1 00:30:07.605 --rc genhtml_function_coverage=1 00:30:07.605 --rc genhtml_legend=1 00:30:07.605 --rc geninfo_all_blocks=1 00:30:07.605 --rc geninfo_unexecuted_blocks=1 00:30:07.605 00:30:07.605 ' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:07.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.605 --rc genhtml_branch_coverage=1 00:30:07.605 --rc genhtml_function_coverage=1 00:30:07.605 --rc genhtml_legend=1 00:30:07.605 --rc geninfo_all_blocks=1 00:30:07.605 --rc geninfo_unexecuted_blocks=1 00:30:07.605 00:30:07.605 ' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:07.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.605 --rc genhtml_branch_coverage=1 00:30:07.605 --rc genhtml_function_coverage=1 00:30:07.605 --rc genhtml_legend=1 00:30:07.605 --rc geninfo_all_blocks=1 00:30:07.605 --rc geninfo_unexecuted_blocks=1 00:30:07.605 00:30:07.605 ' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:07.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:30:07.605 10:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.605 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:30:07.606 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:30:07.866 Error setting digest 00:30:07.866 4002F242337F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:30:07.866 4002F242337F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:07.866 
10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.866 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.164 10:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:11.164 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:11.164 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.164 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.165 10:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:11.165 Found net devices under 0000:84:00.0: cvl_0_0 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:11.165 Found net devices under 0000:84:00.1: cvl_0_1 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.165 10:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:30:11.165 00:30:11.165 --- 10.0.0.2 ping statistics --- 00:30:11.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.165 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:30:11.165 00:30:11.165 --- 10.0.0.1 ping statistics --- 00:30:11.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.165 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2150392 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2150392 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2150392 ']' 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.165 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:11.165 [2024-12-09 10:40:55.706921] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
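The FIPS test rebuilds the same topology the TLS test tore down: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a ping before nvmf_tgt is launched inside the namespace. A condensed replay of that sequence, using only commands that appear in the trace (the real nvmf_tcp_init helper adds an iptables comment tag and cleanup hooks omitted here):

    # Split the two E810 ports across namespaces, as in the trace above.
    tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$tgt_if"; ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"          # target NIC lives in the netns
    ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator stays in root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP port on the initiator-side interface.
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1

Running the target behind ip netns exec is also why the nvmf_tgt launch below and the reverse ping both carry the cvl_0_0_ns_spdk prefix in this log.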
00:30:11.165 [2024-12-09 10:40:55.707088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.425 [2024-12-09 10:40:55.880535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.425 [2024-12-09 10:40:55.998180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.425 [2024-12-09 10:40:55.998307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.425 [2024-12-09 10:40:55.998362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.425 [2024-12-09 10:40:55.998394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.425 [2024-12-09 10:40:55.998421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.425 [2024-12-09 10:40:55.999373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.FKu 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.FKu 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.FKu 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.FKu 00:30:11.685 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:12.624 [2024-12-09 10:40:56.929620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.624 [2024-12-09 10:40:56.946830] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:12.624 [2024-12-09 10:40:56.947302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.624 malloc0 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:12.624 10:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2150547 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2150547 /var/tmp/bdevperf.sock 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2150547 ']' 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:12.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.624 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:12.624 [2024-12-09 10:40:57.215854] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:30:12.624 [2024-12-09 10:40:57.216044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150547 ] 00:30:12.884 [2024-12-09 10:40:57.385136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.884 [2024-12-09 10:40:57.507234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.242 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.242 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:30:13.242 10:40:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.FKu 00:30:13.851 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:14.110 [2024-12-09 10:40:58.715132] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:14.370 TLSTESTn1 00:30:14.370 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:14.370 Running I/O for 10 seconds... 
00:30:16.679 1480.00 IOPS, 5.78 MiB/s [2024-12-09T09:41:02.268Z] 1528.00 IOPS, 5.97 MiB/s [2024-12-09T09:41:03.199Z] 1519.33 IOPS, 5.93 MiB/s [2024-12-09T09:41:04.134Z] 1511.25 IOPS, 5.90 MiB/s [2024-12-09T09:41:05.069Z] 1506.20 IOPS, 5.88 MiB/s [2024-12-09T09:41:06.446Z] 1656.33 IOPS, 6.47 MiB/s [2024-12-09T09:41:07.377Z] 1732.43 IOPS, 6.77 MiB/s [2024-12-09T09:41:08.309Z] 1852.25 IOPS, 7.24 MiB/s [2024-12-09T09:41:09.240Z] 1859.33 IOPS, 7.26 MiB/s [2024-12-09T09:41:09.240Z] 1969.80 IOPS, 7.69 MiB/s 00:30:24.586 Latency(us) 00:30:24.586 [2024-12-09T09:41:09.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.586 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:24.586 Verification LBA range: start 0x0 length 0x2000 00:30:24.586 TLSTESTn1 : 10.04 1973.95 7.71 0.00 0.00 64670.89 13592.65 62137.84 00:30:24.586 [2024-12-09T09:41:09.240Z] =================================================================================================================== 00:30:24.586 [2024-12-09T09:41:09.240Z] Total : 1973.95 7.71 0.00 0.00 64670.89 13592.65 62137.84 00:30:24.586 { 00:30:24.586 "results": [ 00:30:24.586 { 00:30:24.586 "job": "TLSTESTn1", 00:30:24.586 "core_mask": "0x4", 00:30:24.586 "workload": "verify", 00:30:24.586 "status": "finished", 00:30:24.586 "verify_range": { 00:30:24.586 "start": 0, 00:30:24.586 "length": 8192 00:30:24.586 }, 00:30:24.586 "queue_depth": 128, 00:30:24.586 "io_size": 4096, 00:30:24.586 "runtime": 10.041795, 00:30:24.586 "iops": 1973.949876491205, 00:30:24.586 "mibps": 7.71074170504377, 00:30:24.586 "io_failed": 0, 00:30:24.586 "io_timeout": 0, 00:30:24.586 "avg_latency_us": 64670.8930602361, 00:30:24.586 "min_latency_us": 13592.651851851851, 00:30:24.586 "max_latency_us": 62137.83703703704 00:30:24.586 } 00:30:24.586 ], 00:30:24.586 "core_count": 1 00:30:24.586 } 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:24.586 nvmf_trace.0 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2150547 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2150547 ']' 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
-- # kill -0 2150547 00:30:24.586 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:30:24.844 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.844 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2150547 00:30:24.844 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:24.844 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:24.844 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2150547' 00:30:24.844 killing process with pid 2150547 00:30:24.844 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2150547 00:30:24.844 Received shutdown signal, test time was about 10.000000 seconds 00:30:24.844 00:30:24.844 Latency(us) 00:30:24.844 [2024-12-09T09:41:09.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.844 [2024-12-09T09:41:09.498Z] =================================================================================================================== 00:30:24.844 [2024-12-09T09:41:09.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.844 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2150547 00:30:25.105 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:30:25.105 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.105 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:30:25.105 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.105 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:30:25.105 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.105 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.105 rmmod nvme_tcp 00:30:25.105 rmmod nvme_fabrics 00:30:25.105 rmmod nvme_keyring 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2150392 ']' 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2150392 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2150392 ']' 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2150392 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2150392 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2150392' 00:30:25.367 killing process with pid 2150392 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2150392 00:30:25.367 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2150392 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.625 10:41:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.FKu 00:30:28.159 00:30:28.159 real 0m20.466s 00:30:28.159 user 0m27.170s 00:30:28.159 sys 0m7.199s 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:28.159 ************************************ 00:30:28.159 END TEST nvmf_fips 00:30:28.159 ************************************ 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:28.159 ************************************ 00:30:28.159 START TEST nvmf_control_msg_list 00:30:28.159 ************************************ 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:30:28.159 * Looking for test storage... 
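Every test script opens with the same prologue traced below: locate the test storage directory, then probe the installed lcov version with lt 1.15 2 to decide which coverage flags to export. The field-by-field walk that the next lines single-step through is cmp_versions from scripts/common.sh; a condensed sketch, not the verbatim helper (the decimal sanity checks on each field are folded into the :-0 defaults here):

    cmp_versions() {   # e.g. cmp_versions 1.15 '<' 2
        local op=$2 v ver1 ver2 ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"     # split fields on '.', '-', ':'
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: only <=, >=, == succeed
    }

For lcov 1.x the "< 2" comparison succeeds, which is why the 1.x-era --rc lcov_branch_coverage / lcov_function_coverage option sets are exported in the lines that follow.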
00:30:28.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:28.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.159 --rc genhtml_branch_coverage=1 00:30:28.159 --rc genhtml_function_coverage=1 00:30:28.159 --rc genhtml_legend=1 00:30:28.159 --rc geninfo_all_blocks=1 00:30:28.159 --rc geninfo_unexecuted_blocks=1 00:30:28.159 00:30:28.159 ' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:28.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.159 --rc genhtml_branch_coverage=1 00:30:28.159 --rc genhtml_function_coverage=1 00:30:28.159 --rc genhtml_legend=1 00:30:28.159 --rc geninfo_all_blocks=1 00:30:28.159 --rc geninfo_unexecuted_blocks=1 00:30:28.159 00:30:28.159 ' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:28.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.159 --rc genhtml_branch_coverage=1 00:30:28.159 --rc genhtml_function_coverage=1 00:30:28.159 --rc genhtml_legend=1 00:30:28.159 --rc geninfo_all_blocks=1 00:30:28.159 --rc geninfo_unexecuted_blocks=1 00:30:28.159 00:30:28.159 ' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:28.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.159 --rc genhtml_branch_coverage=1 00:30:28.159 --rc genhtml_function_coverage=1 00:30:28.159 --rc genhtml_legend=1 00:30:28.159 --rc geninfo_all_blocks=1 00:30:28.159 --rc geninfo_unexecuted_blocks=1 00:30:28.159 00:30:28.159 ' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.159 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:28.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.160 10:41:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:30:31.450 10:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:31.450 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.450 10:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:31.450 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.450 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:31.451 Found net devices under 0000:84:00.0: cvl_0_0 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:31.451 Found net devices under 0000:84:00.1: cvl_0_1 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.451 10:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:30:31.451 00:30:31.451 --- 10.0.0.2 ping statistics --- 00:30:31.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.451 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:30:31.451 00:30:31.451 --- 10.0.0.1 ping statistics --- 00:30:31.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.451 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2154082 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2154082 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2154082 ']' 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.451 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.451 [2024-12-09 10:41:15.874307] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:30:31.451 [2024-12-09 10:41:15.874498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.451 [2024-12-09 10:41:16.060332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.709 [2024-12-09 10:41:16.177181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.709 [2024-12-09 10:41:16.177295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.709 [2024-12-09 10:41:16.177332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.709 [2024-12-09 10:41:16.177362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.709 [2024-12-09 10:41:16.177388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
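The back-to-back topology this target starts in was assembled by nvmf_tcp_init a few lines up: the first E810 port (cvl_0_0) is moved into a private network namespace for the target, while the second port (cvl_0_1) stays in the root namespace as the initiator side. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                  # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic on 4420; the comment tag is what lets teardown
    # strip exactly this rule later via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

Both directions were then ping-checked, and nvmf_tgt itself is launched under ip netns exec cvl_0_0_ns_spdk, which is why every target-side command in this test carries the namespace wrapper prefix.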
00:30:31.709 [2024-12-09 10:41:16.178748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.967 [2024-12-09 10:41:16.532681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.967 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.967 Malloc0 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.968 10:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:30:31.968 [2024-12-09 10:41:16.599262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2154225 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2154226 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2154227 00:30:31.968 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2154225 00:30:32.226 [2024-12-09 10:41:16.679780] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:32.226 [2024-12-09 10:41:16.680091] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:32.226 [2024-12-09 10:41:16.690534] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:33.161 Initializing NVMe Controllers 00:30:33.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:30:33.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:30:33.161 Initialization complete. Launching workers. 
00:30:33.161 ======================================================== 00:30:33.161 Latency(us) 00:30:33.161 Device Information : IOPS MiB/s Average min max 00:30:33.161 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2685.00 10.49 371.68 159.52 671.61 00:30:33.161 ======================================================== 00:30:33.161 Total : 2685.00 10.49 371.68 159.52 671.61 00:30:33.161 00:30:33.162 [2024-12-09 10:41:17.704012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c70 is same with the state(6) to be set 00:30:33.162 Initializing NVMe Controllers 00:30:33.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:30:33.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:30:33.162 Initialization complete. Launching workers. 00:30:33.162 ======================================================== 00:30:33.162 Latency(us) 00:30:33.162 Device Information : IOPS MiB/s Average min max 00:30:33.162 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2558.00 9.99 390.15 171.77 860.21 00:30:33.162 ======================================================== 00:30:33.162 Total : 2558.00 9.99 390.15 171.77 860.21 00:30:33.162 00:30:33.162 Initializing NVMe Controllers 00:30:33.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:30:33.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:30:33.162 Initialization complete. Launching workers. 00:30:33.162 ======================================================== 00:30:33.162 Latency(us) 00:30:33.162 Device Information : IOPS MiB/s Average min max 00:30:33.162 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2176.00 8.50 458.66 183.74 654.11 00:30:33.162 ======================================================== 00:30:33.162 Total : 2176.00 8.50 458.66 183.74 654.11 00:30:33.162 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2154226 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2154227 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.423 rmmod nvme_tcp 00:30:33.423 rmmod nvme_fabrics 00:30:33.423 rmmod nvme_keyring 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:30:33.423 10:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2154082 ']' 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2154082 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2154082 ']' 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2154082 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154082 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154082' 00:30:33.423 killing process with pid 2154082 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2154082 00:30:33.423 10:41:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2154082 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.989 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.890 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.891 00:30:35.891 real 0m8.169s 00:30:35.891 user 0m6.591s 00:30:35.891 sys 0m3.836s 00:30:35.891 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.891 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:30:35.891 ************************************ 00:30:35.891 END TEST nvmf_control_msg_list 00:30:35.891 ************************************ 00:30:35.891 10:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:30:35.891 10:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:35.891 10:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.891 10:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:35.891 ************************************ 00:30:35.891 START TEST nvmf_wait_for_buf 00:30:35.891 ************************************ 00:30:35.891 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:30:36.151 * Looking for test storage... 00:30:36.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:36.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.151 --rc genhtml_branch_coverage=1 00:30:36.151 --rc genhtml_function_coverage=1 00:30:36.151 --rc genhtml_legend=1 00:30:36.151 --rc geninfo_all_blocks=1 00:30:36.151 --rc geninfo_unexecuted_blocks=1 00:30:36.151 00:30:36.151 ' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:36.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.151 --rc genhtml_branch_coverage=1 00:30:36.151 --rc genhtml_function_coverage=1 00:30:36.151 --rc genhtml_legend=1 00:30:36.151 --rc geninfo_all_blocks=1 00:30:36.151 --rc geninfo_unexecuted_blocks=1 00:30:36.151 00:30:36.151 ' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:36.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.151 --rc genhtml_branch_coverage=1 00:30:36.151 --rc genhtml_function_coverage=1 00:30:36.151 --rc genhtml_legend=1 00:30:36.151 --rc geninfo_all_blocks=1 00:30:36.151 --rc geninfo_unexecuted_blocks=1 00:30:36.151 00:30:36.151 ' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:36.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.151 --rc genhtml_branch_coverage=1 00:30:36.151 --rc genhtml_function_coverage=1 00:30:36.151 --rc genhtml_legend=1 00:30:36.151 --rc geninfo_all_blocks=1 00:30:36.151 --rc geninfo_unexecuted_blocks=1 00:30:36.151 00:30:36.151 ' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.151 10:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.151 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.152 10:41:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:39.445 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.445 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:39.445 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:39.445 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:39.445 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:39.445 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.446 
10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:39.446 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:39.446 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:39.446 Found net devices under 0000:84:00.0: cvl_0_0 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:39.446 Found net devices under 0000:84:00.1: cvl_0_1 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.446 10:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:39.446 10:41:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:39.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:30:39.446 00:30:39.446 --- 10.0.0.2 ping statistics --- 00:30:39.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.446 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:30:39.446 00:30:39.446 --- 10.0.0.1 ping statistics --- 00:30:39.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.446 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:39.446 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.447 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:39.447 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:39.447 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.447 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:39.447 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2156452 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2156452 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2156452 ']' 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.706 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:39.706 [2024-12-09 10:41:24.245351] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
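For reference, the bring-up that nvmf_tcp_init logged just above reduces to the following sequence. This is a sketch assembled from the commands as they appear in this log; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses belong to this particular E810 rig and will differ elsewhere.

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # root ns -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator reachability

The SPDK_NVMF comment on the iptables rule is what the iptr teardown helper (iptables-save piped through grep -v SPDK_NVMF into iptables-restore, visible at the end of each test above) keys on, so cleanup strips only the rules the harness added without disturbing other firewall state.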
00:30:39.706 [2024-12-09 10:41:24.245536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.013 [2024-12-09 10:41:24.412630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.013 [2024-12-09 10:41:24.517563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.013 [2024-12-09 10:41:24.517683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.013 [2024-12-09 10:41:24.517735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.013 [2024-12-09 10:41:24.517770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.013 [2024-12-09 10:41:24.517798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.013 [2024-12-09 10:41:24.519166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.389 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.389 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:30:41.389 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:41.389 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:41.389 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.389 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 Malloc0 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 [2024-12-09 10:41:25.891312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:41.390 [2024-12-09 10:41:25.927654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.390 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.648 [2024-12-09 10:41:26.102027] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:43.026 Initializing NVMe Controllers 00:30:43.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:30:43.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:30:43.026 Initialization complete. Launching workers. 00:30:43.026 ======================================================== 00:30:43.026 Latency(us) 00:30:43.026 Device Information : IOPS MiB/s Average min max 00:30:43.026 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 26.96 3.37 157334.00 6746.95 211479.60 00:30:43.026 ======================================================== 00:30:43.026 Total : 26.96 3.37 157334.00 6746.95 211479.60 00:30:43.026 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=406 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 406 -eq 0 ]] 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.286 rmmod nvme_tcp 00:30:43.286 rmmod nvme_fabrics 00:30:43.286 rmmod nvme_keyring 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2156452 ']' 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2156452 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2156452 ']' 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2156452 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2156452 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2156452' 00:30:43.286 killing process with pid 2156452 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2156452 00:30:43.286 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2156452 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.918 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.828 00:30:45.828 real 0m9.804s 00:30:45.828 user 0m5.793s 00:30:45.828 sys 0m3.345s 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:30:45.828 ************************************ 00:30:45.828 END TEST nvmf_wait_for_buf 00:30:45.828 ************************************ 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:30:45.828 10:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.828 10:41:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:49.120 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:49.120 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:49.120 Found net devices under 0000:84:00.0: cvl_0_0 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:49.120 Found net devices under 0000:84:00.1: cvl_0_1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:49.120 ************************************ 00:30:49.120 START TEST nvmf_perf_adq 00:30:49.120 ************************************ 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:49.120 * Looking for test storage... 00:30:49.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.120 10:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.120 --rc genhtml_branch_coverage=1 00:30:49.120 --rc genhtml_function_coverage=1 00:30:49.120 --rc genhtml_legend=1 00:30:49.120 --rc geninfo_all_blocks=1 00:30:49.120 --rc geninfo_unexecuted_blocks=1 00:30:49.120 00:30:49.120 ' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.120 --rc genhtml_branch_coverage=1 00:30:49.120 --rc genhtml_function_coverage=1 00:30:49.120 --rc genhtml_legend=1 00:30:49.120 --rc geninfo_all_blocks=1 00:30:49.120 --rc geninfo_unexecuted_blocks=1 00:30:49.120 00:30:49.120 ' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.120 --rc genhtml_branch_coverage=1 00:30:49.120 --rc genhtml_function_coverage=1 00:30:49.120 --rc genhtml_legend=1 00:30:49.120 --rc geninfo_all_blocks=1 00:30:49.120 --rc geninfo_unexecuted_blocks=1 00:30:49.120 00:30:49.120 ' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:49.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.120 --rc genhtml_branch_coverage=1 00:30:49.120 --rc genhtml_function_coverage=1 00:30:49.120 --rc genhtml_legend=1 00:30:49.120 --rc geninfo_all_blocks=1 00:30:49.120 --rc geninfo_unexecuted_blocks=1 00:30:49.120 00:30:49.120 ' 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
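The nvmf_wait_for_buf body that just passed reduces to a short RPC script. A sketch of the same sequence replayed by hand, assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock the target was waiting on; every flag is copied from the rpc_cmd lines logged above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# Starve the shared iobuf small pool before the framework initializes ...
$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$rpc framework_start_init                  # leave --wait-for-rpc mode
# ... then stand up a malloc-backed NVMe-oF TCP target with tiny buffer counts
$rpc bdev_malloc_create -b Malloc0 32 512  # 32 MiB malloc disk, 512-byte blocks
$rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Push 128K random reads at it so the starved pool is forced onto the wait path
$perf -q 4 -o 131072 -w randread -t 1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# Pass criterion: the nvmf_TCP small pool had to retry (406 times in this run)
retry=$($rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry -ne 0 ]]

Only 24 shared transport buffers against 131072-byte I/O is what makes exhaustion certain; the nonzero small_pool.retry count is the evidence the test checks for, showing requests really did wait for buffers rather than being served from the fast path.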
00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:49.120 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:49.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:30:49.121 10:41:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.121 10:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:52.423 10:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:52.423 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:52.423 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:52.423 Found net devices under 0000:84:00.0: cvl_0_0 00:30:52.423 10:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:52.423 Found net devices under 0000:84:00.1: cvl_0_1 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:30:52.423 10:41:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:30:52.682 10:41:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:30:55.215 10:41:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:00.489 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:00.489 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.489 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:00.490 Found net devices under 0000:84:00.0: cvl_0_0 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:00.490 Found net devices under 0000:84:00.1: cvl_0_1 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:00.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:00.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:31:00.490
00:31:00.490 --- 10.0.0.2 ping statistics ---
00:31:00.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:00.490 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:00.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:00.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:31:00.490
00:31:00.490 --- 10.0.0.1 ping statistics ---
00:31:00.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:00.490 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2161613
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2161613
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2161613 ']'
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:00.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:00.490 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:31:00.490 [2024-12-09 10:41:44.849195] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
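At this point the harness has built the NVMe/TCP test topology: the first E810 port (cvl_0_0) lives in a private network namespace and carries the target address 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the listener port 4420 is opened in iptables, reachability is confirmed by one ping in each direction, and nvmf_tgt is launched inside the namespace with --wait-for-rpc. A condensed sketch of that bring-up, using the interface names and addresses from this run; the wrapper function itself is illustrative, the real logic lives in test/nvmf/common.sh:

# Condensed sketch of the namespace plumbing replayed above; the helper
# name is ours, the commands mirror the trace.
setup_target_ns() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator side stays in the root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port on the initiator-facing interface
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> namespace
    ip netns exec "$ns" ping -c 1 10.0.0.1     # namespace -> root ns
}

Running the target under "ip netns exec" is what makes a single dual-port NIC act as both ends of a real link, which is why the trace prepends NVMF_TARGET_NS_CMD to NVMF_APP.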
00:31:00.490 [2024-12-09 10:41:44.849298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.490 [2024-12-09 10:41:44.985687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.490 [2024-12-09 10:41:45.106810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.490 [2024-12-09 10:41:45.106931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.490 [2024-12-09 10:41:45.106986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.490 [2024-12-09 10:41:45.107032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.490 [2024-12-09 10:41:45.107070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.490 [2024-12-09 10:41:45.110696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.490 [2024-12-09 10:41:45.110840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.490 [2024-12-09 10:41:45.110836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.490 [2024-12-09 10:41:45.110805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.750 
10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.750 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:00.751 [2024-12-09 10:41:45.372439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.751 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:01.008 Malloc1 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:01.008 [2024-12-09 10:41:45.436047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2161732 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:31:01.008 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:31:02.904 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:31:02.904 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:02.904 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:31:02.904 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:02.904 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:31:02.904 "tick_rate": 2700000000,
00:31:02.904 "poll_groups": [
00:31:02.904 {
00:31:02.904 "name": "nvmf_tgt_poll_group_000",
00:31:02.904 "admin_qpairs": 1,
00:31:02.904 "io_qpairs": 1,
00:31:02.904 "current_admin_qpairs": 1,
00:31:02.904 "current_io_qpairs": 1,
00:31:02.904 "pending_bdev_io": 0,
00:31:02.904 "completed_nvme_io": 18899,
00:31:02.904 "transports": [
00:31:02.904 {
00:31:02.904 "trtype": "TCP"
00:31:02.904 }
00:31:02.904 ]
00:31:02.904 },
00:31:02.904 {
00:31:02.904 "name": "nvmf_tgt_poll_group_001",
00:31:02.904 "admin_qpairs": 0,
00:31:02.904 "io_qpairs": 1,
00:31:02.904 "current_admin_qpairs": 0,
00:31:02.904 "current_io_qpairs": 1,
00:31:02.904 "pending_bdev_io": 0,
00:31:02.904 "completed_nvme_io": 18933,
00:31:02.904 "transports": [
00:31:02.904 {
00:31:02.904 "trtype": "TCP"
00:31:02.904 }
00:31:02.904 ]
00:31:02.904 },
00:31:02.904 {
00:31:02.904 "name": "nvmf_tgt_poll_group_002",
00:31:02.904 "admin_qpairs": 0,
00:31:02.904 "io_qpairs": 1,
00:31:02.904 "current_admin_qpairs": 0,
00:31:02.904 "current_io_qpairs": 1,
00:31:02.904 "pending_bdev_io": 0,
00:31:02.904 "completed_nvme_io": 19176,
00:31:02.904 "transports": [
00:31:02.904 {
00:31:02.904 "trtype": "TCP"
00:31:02.904 }
00:31:02.904 ]
00:31:02.904 },
00:31:02.904 {
00:31:02.905 "name": "nvmf_tgt_poll_group_003",
00:31:02.905 "admin_qpairs": 0,
00:31:02.905 "io_qpairs": 1,
00:31:02.905 "current_admin_qpairs": 0,
00:31:02.905 "current_io_qpairs": 1,
00:31:02.905 "pending_bdev_io": 0,
00:31:02.905 "completed_nvme_io": 18836,
00:31:02.905 "transports": [
00:31:02.905 {
00:31:02.905 "trtype": "TCP"
00:31:02.905 }
00:31:02.905 ]
00:31:02.905 }
00:31:02.905 ]
00:31:02.905 }'
00:31:02.905 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:31:02.905 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:31:02.905 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:31:02.905 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:31:02.905 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2161732
00:31:11.010 Initializing NVMe Controllers
00:31:11.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:11.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:31:11.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:31:11.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:31:11.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:31:11.010 Initialization complete. Launching workers.
00:31:11.010 ========================================================
00:31:11.010 Latency(us)
00:31:11.010 Device Information : IOPS MiB/s Average min max
00:31:11.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10067.20 39.32 6358.36 1763.90 10914.63
00:31:11.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10178.80 39.76 6289.49 2466.19 10624.31
00:31:11.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10242.20 40.01 6248.55 2620.46 10241.15
00:31:11.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10107.40 39.48 6331.44 2481.31 10787.14
00:31:11.011 ========================================================
00:31:11.011 Total : 40595.59 158.58 6306.69 1763.90 10914.63
00:31:11.011
00:31:11.011 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:31:11.011 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:11.011 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:31:11.011 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:11.011 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:31:11.011 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:11.011 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:11.011 rmmod nvme_tcp
00:31:11.011 rmmod nvme_fabrics
00:31:11.011 rmmod nvme_keyring
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2161613 ']'
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2161613
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2161613 ']'
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2161613
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2161613
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2161613'
00:31:11.270 killing process with pid 2161613
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2161613
00:31:11.270 10:41:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2161613
00:31:11.529 10:41:56
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:11.529 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:11.529 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:11.529 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:31:11.529 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:31:11.529 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:11.529 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:31:11.529 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:11.530 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:11.530 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.530 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.530 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.065 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.065 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:31:14.065 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:31:14.065 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:31:14.323 10:41:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:31:16.228 10:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:21.504 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:21.504 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:21.504 Found net devices under 0000:84:00.0: cvl_0_0 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.504 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:21.505 Found net devices under 0000:84:00.1: cvl_0_1 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.505 10:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.505 10:42:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:31:21.505 00:31:21.505 --- 10.0.0.2 ping statistics --- 00:31:21.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.505 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:21.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:31:21.505 00:31:21.505 --- 10.0.0.1 ping statistics --- 00:31:21.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.505 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:31:21.505 net.core.busy_poll = 1 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:31:21.505 net.core.busy_read = 1 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:31:21.505 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2164453 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2164453 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2164453 ']' 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.765 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.766 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.766 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.766 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:21.766 [2024-12-09 10:42:06.324761] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:31:21.766 [2024-12-09 10:42:06.324878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.025 [2024-12-09 10:42:06.424617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.025 [2024-12-09 10:42:06.499840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
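Taken together, the adq_configure_driver trace above (perf_adq.sh@22-38) is a small, self-contained NIC setup. Condensed into a sketch, with the namespace, interface, and address names taken from this run (the final step, scripts/perf/nvmf/set_xps_rxqs, pins XPS queue affinity and is omitted here):

    #!/usr/bin/env bash
    # NIC-side ADQ setup as traced above; cvl_0_0 lives inside the target netns.
    NS="ip netns exec cvl_0_0_ns_spdk"
    DEV=cvl_0_0
    $NS ethtool --offload $DEV hw-tc-offload on
    $NS ethtool --set-priv-flags $DEV channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1   # busy-poll sockets instead of sleeping (root netns)
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (NVMe/TCP)
    $NS tc qdisc add dev $DEV root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev $DEV ingress
    # Hardware-steer inbound NVMe/TCP traffic for 10.0.0.2:4420 into TC1
    $NS tc filter add dev $DEV protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1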
00:31:22.025 [2024-12-09 10:42:06.499920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.025 [2024-12-09 10:42:06.499938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.025 [2024-12-09 10:42:06.499955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.025 [2024-12-09 10:42:06.499980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.025 [2024-12-09 10:42:06.502110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.025 [2024-12-09 10:42:06.502170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.025 [2024-12-09 10:42:06.502245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.025 [2024-12-09 10:42:06.502248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.025 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.285 [2024-12-09 10:42:06.855840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.285 Malloc1 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:22.285 [2024-12-09 10:42:06.923293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2164490 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:31:22.285 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:24.866 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:31:24.866 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.866 10:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:31:24.866 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:24.866 10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:31:24.866 "tick_rate": 2700000000,
00:31:24.866 "poll_groups": [
00:31:24.866 {
00:31:24.866 "name": "nvmf_tgt_poll_group_000",
00:31:24.866 "admin_qpairs": 1,
00:31:24.866 "io_qpairs": 2,
00:31:24.866 "current_admin_qpairs": 1,
00:31:24.866 "current_io_qpairs": 2,
00:31:24.866 "pending_bdev_io": 0,
00:31:24.866 "completed_nvme_io": 24222,
00:31:24.866 "transports": [
00:31:24.866 {
00:31:24.866 "trtype": "TCP"
00:31:24.866 }
00:31:24.866 ]
00:31:24.866 },
00:31:24.866 {
00:31:24.866 "name": "nvmf_tgt_poll_group_001",
00:31:24.866 "admin_qpairs": 0,
00:31:24.866 "io_qpairs": 2,
00:31:24.866 "current_admin_qpairs": 0,
00:31:24.866 "current_io_qpairs": 2,
00:31:24.866 "pending_bdev_io": 0,
00:31:24.866 "completed_nvme_io": 24481,
00:31:24.866 "transports": [
00:31:24.866 {
00:31:24.866 "trtype": "TCP"
00:31:24.866 }
00:31:24.866 ]
00:31:24.866 },
00:31:24.866 {
00:31:24.866 "name": "nvmf_tgt_poll_group_002",
00:31:24.866 "admin_qpairs": 0,
00:31:24.866 "io_qpairs": 0,
00:31:24.866 "current_admin_qpairs": 0,
00:31:24.866 "current_io_qpairs": 0,
00:31:24.866 "pending_bdev_io": 0,
00:31:24.866 "completed_nvme_io": 0,
00:31:24.866 "transports": [
00:31:24.866 {
00:31:24.866 "trtype": "TCP"
00:31:24.866 }
00:31:24.866 ]
00:31:24.866 },
00:31:24.866 {
00:31:24.866 "name": "nvmf_tgt_poll_group_003",
00:31:24.866 "admin_qpairs": 0,
00:31:24.866 "io_qpairs": 0,
00:31:24.866 "current_admin_qpairs": 0,
00:31:24.866 "current_io_qpairs": 0,
00:31:24.866 "pending_bdev_io": 0,
00:31:24.866 "completed_nvme_io": 0,
00:31:24.866 "transports": [
00:31:24.866 {
00:31:24.866 "trtype": "TCP"
00:31:24.866 }
00:31:24.866 ]
00:31:24.866 }
00:31:24.866 ]
00:31:24.866 }'
10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
10:42:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:31:24.866 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:31:24.866 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:31:24.866 10:42:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2164490
00:31:33.045 Initializing NVMe Controllers
00:31:33.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:31:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:31:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:31:33.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:31:33.045 Initialization complete. Launching workers.
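The nvmf_get_stats snapshot above was taken two seconds into the workload (the sleep 2 at perf_adq.sh@105) and feeds the suite's one assertion: with ADQ steering in effect, all four io_qpairs must sit on poll groups 000/001, leaving 002/003 idle. The jq/wc pipeline at perf_adq.sh@107-109 amounts to this sketch (plain scripts/rpc.py standing in for the harness's rpc_cmd wrapper):

    # Count poll groups that currently own no IO qpairs; fail the run if
    # fewer than half of the 4 groups are idle (i.e. IO leaked onto all cores).
    count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ steering failed: only $count of 4 poll groups idle" >&2
        exit 1
    fi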
00:31:33.045 ========================================================
00:31:33.045 Latency(us)
00:31:33.045 Device Information : IOPS MiB/s Average min max
00:31:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5554.00 21.70 11525.69 1720.30 54741.55
00:31:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6202.00 24.23 10322.15 1740.14 54155.19
00:31:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7692.00 30.05 8321.61 1748.21 54504.55
00:31:33.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6969.20 27.22 9186.10 1394.38 53955.11
00:31:33.045 ========================================================
00:31:33.045 Total : 26417.19 103.19 9692.98 1394.38 54741.55
00:31:33.045
00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:33.045 rmmod nvme_tcp
00:31:33.045 rmmod nvme_fabrics
00:31:33.045 rmmod nvme_keyring
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2164453 ']'
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2164453
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2164453 ']'
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2164453
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2164453
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2164453'
00:31:33.045 killing process with pid 2164453
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2164453
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2164453
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:33.045
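The skew in the per-core numbers above is expected for randread over busy-polled connections; what matters is that initiator cores 4-7 all carried traffic while the target kept IO on two of its four reactors. The target-side knobs that produced this were set back at perf_adq.sh@42-45 and reduce to three RPCs, flags copied from this run's trace:

    # Target-side ADQ configuration: group each connection onto the poll
    # group matching its hardware queue, and tag transport sockets with
    # priority 1 so they line up with hw_tc 1 on the NIC.
    scripts/rpc.py sock_impl_set_options -i posix \
        --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init      # leave --wait-for-rpc mode
    scripts/rpc.py nvmf_create_transport -t tcp -o \
        --io-unit-size 8192 --sock-priority 1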
10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.045 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.046 10:42:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:31:36.333 00:31:36.333 real 0m47.341s 00:31:36.333 user 2m42.840s 00:31:36.333 sys 0m10.673s 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:36.333 ************************************ 00:31:36.333 END TEST nvmf_perf_adq 00:31:36.333 ************************************ 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:36.333 ************************************ 00:31:36.333 START TEST nvmf_shutdown 00:31:36.333 ************************************ 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:36.333 * Looking for test storage... 
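Before the shutdown suite proceeds (storage discovery continues below), one detail of the nvmftestfini teardown above is worth a note: the harness never flushes iptables wholesale. Every rule it installed carries the SPDK_NVMF comment tag visible in the earlier ipts calls, so cleanup is a filter-and-restore that leaves unrelated rules intact:

    # nvmf/common.sh 'iptr' cleanup as traced above: reload the ruleset
    # minus any line tagged with the SPDK_NVMF comment marker.
    iptables-save | grep -v SPDK_NVMF | iptables-restore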
00:31:36.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:31:36.333 10:42:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.593 --rc genhtml_branch_coverage=1 00:31:36.593 --rc genhtml_function_coverage=1 00:31:36.593 --rc genhtml_legend=1 00:31:36.593 --rc geninfo_all_blocks=1 00:31:36.593 --rc geninfo_unexecuted_blocks=1 00:31:36.593 00:31:36.593 ' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.593 --rc genhtml_branch_coverage=1 00:31:36.593 --rc genhtml_function_coverage=1 00:31:36.593 --rc genhtml_legend=1 00:31:36.593 --rc geninfo_all_blocks=1 00:31:36.593 --rc geninfo_unexecuted_blocks=1 00:31:36.593 00:31:36.593 ' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.593 --rc genhtml_branch_coverage=1 00:31:36.593 --rc genhtml_function_coverage=1 00:31:36.593 --rc genhtml_legend=1 00:31:36.593 --rc geninfo_all_blocks=1 00:31:36.593 --rc geninfo_unexecuted_blocks=1 00:31:36.593 00:31:36.593 ' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:36.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.593 --rc genhtml_branch_coverage=1 00:31:36.593 --rc genhtml_function_coverage=1 00:31:36.593 --rc genhtml_legend=1 00:31:36.593 --rc geninfo_all_blocks=1 00:31:36.593 --rc geninfo_unexecuted_blocks=1 00:31:36.593 00:31:36.593 ' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
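The scripts/common.sh trace above is version-gating lcov: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on '.', '-' or ':' and compares them numerically field by field. A condensed re-implementation of what is being stepped through (not the verbatim library code, and assuming purely numeric components):

    # Field-wise numeric version comparison mirroring the cmp_versions trace.
    cmp_versions() {
        local IFS=.-:    # split version strings on '.', '-' and ':'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$3"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a > b )) && { [[ $2 == *'>'* ]]; return; }
            (( a < b )) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]   # all fields equal: true only for ==, <=, >=
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"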
00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.593 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:36.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:36.594 10:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:36.594 ************************************ 00:31:36.594 START TEST nvmf_shutdown_tc1 00:31:36.594 ************************************ 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:36.594 10:42:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.878 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:31:39.878 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.879 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:39.879 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:39.879 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:39.879 Found net devices under 0000:84:00.0: cvl_0_0 00:31:39.879 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:39.879 Found net devices under 0000:84:00.1: cvl_0_1 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.879 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:31:39.880 00:31:39.880 --- 10.0.0.2 ping statistics --- 00:31:39.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.880 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:31:39.880 00:31:39.880 --- 10.0.0.1 ping statistics --- 00:31:39.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.880 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2168426 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2168426 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2168426 ']' 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
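At this point nvmf_tcp_init has rebuilt, for the shutdown suite, the same two-namespace topology the perf test used: one port of the E810 pair is moved into a private netns as the target side, its peer stays in the root netns as the initiator, and a tagged iptables rule opens port 4420. Condensed, with this run's names:

    # cvl_0_0 (10.0.0.2, target) is isolated in netns cvl_0_0_ns_spdk;
    # cvl_0_1 (10.0.0.1, initiator) remains in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator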
00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.880 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:39.880 [2024-12-09 10:42:24.383946] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:31:39.880 [2024-12-09 10:42:24.384050] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.880 [2024-12-09 10:42:24.472196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.138 [2024-12-09 10:42:24.540693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.138 [2024-12-09 10:42:24.540782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.138 [2024-12-09 10:42:24.540800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.138 [2024-12-09 10:42:24.540814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.138 [2024-12-09 10:42:24.540826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.138 [2024-12-09 10:42:24.542750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.138 [2024-12-09 10:42:24.542780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.138 [2024-12-09 10:42:24.542837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:40.138 [2024-12-09 10:42:24.542841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:40.138 [2024-12-09 10:42:24.715460] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:40.138 10:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.138 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:40.139 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:31:40.139 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:40.139 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.139 10:42:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:40.396 Malloc1 
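The create_subsystems phase above batches its RPCs rather than invoking rpc.py once per call: each pass through the for loop cats one block of commands onto rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays the whole file, producing the Malloc1 bdev just above and Malloc2 through Malloc10 below. A hedged sketch of that batch-then-replay pattern; the malloc geometry, serial numbers and listener address are illustrative stand-ins, not the script's actual values:

    rpcs=/tmp/rpcs.txt
    rm -f "$rpcs"
    for i in {1..10}; do
      cat >>"$rpcs" <<EOF
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done
    rpc_cmd <"$rpcs"   # harness helper: one rpc.py session executes every queued line

Batching matters here because each rpc.py start-up costs far more than the RPC itself; ten subsystems' worth of setup completes over one connection to the socket.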
00:31:40.396 [2024-12-09 10:42:24.827139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.396 Malloc2 00:31:40.396 Malloc3 00:31:40.396 Malloc4 00:31:40.396 Malloc5 00:31:40.396 Malloc6 00:31:40.659 Malloc7 00:31:40.659 Malloc8 00:31:40.659 Malloc9 00:31:40.659 Malloc10 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2168606 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2168606 /var/tmp/bdevperf.sock 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2168606 ']' 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:40.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
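For the perf pass, bdev_svc is launched with --json /dev/fd/63: the controller configuration never touches disk; gen_nvmf_target_json renders it on the fly and hands it over through process substitution. The generator's xtrace follows; the pattern is one heredoc-rendered JSON fragment per subsystem pushed onto a bash array, the elements joined with IFS=',', and the result validated by jq before the app parses it. A condensed, hypothetical sketch of the same pattern (the real fragments carry more fields, and the real output sits inside the app's full JSON-config schema):

    gen_json() {
      local config=() s
      for s in "$@"; do
        config+=("$(cat <<EOF
    { "params": { "name": "Nvme$s", "trtype": "tcp", "trsvcid": "4420" },
      "method": "bdev_nvme_attach_controller" }
    EOF
    )")
      done
      local IFS=,                            # join character for ${config[*]}
      printf '[%s]\n' "${config[*]}" | jq .  # comma-join, wrap, validate
    }
    some_app --json <(gen_json 1 2 3)        # <(...) expands to a /dev/fd/NN path

Here some_app stands in for bdev_svc or bdevperf; the /dev/fd/63 seen in the log is simply the descriptor bash happened to assign to the substitution.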
00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.659 { 00:31:40.659 "params": { 00:31:40.659 "name": "Nvme$subsystem", 00:31:40.659 "trtype": "$TEST_TRANSPORT", 00:31:40.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.659 "adrfam": "ipv4", 00:31:40.659 "trsvcid": "$NVMF_PORT", 00:31:40.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.659 "hdgst": ${hdgst:-false}, 00:31:40.659 "ddgst": ${ddgst:-false} 00:31:40.659 }, 00:31:40.659 "method": "bdev_nvme_attach_controller" 00:31:40.659 } 00:31:40.659 EOF 00:31:40.659 )") 00:31:40.659 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 
"trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.917 "method": "bdev_nvme_attach_controller" 00:31:40.917 } 00:31:40.917 EOF 00:31:40.917 )") 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.917 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.917 { 00:31:40.917 "params": { 00:31:40.917 "name": "Nvme$subsystem", 00:31:40.917 "trtype": "$TEST_TRANSPORT", 00:31:40.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.917 "adrfam": "ipv4", 00:31:40.917 "trsvcid": "$NVMF_PORT", 00:31:40.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.917 "hdgst": ${hdgst:-false}, 00:31:40.917 "ddgst": ${ddgst:-false} 00:31:40.917 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 } 00:31:40.918 EOF 00:31:40.918 )") 00:31:40.918 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:40.918 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:31:40.918 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:31:40.918 10:42:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme1", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme2", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme3", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme4", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme5", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme6", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme7", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme8", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme9", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 },{ 00:31:40.918 "params": { 00:31:40.918 "name": "Nvme10", 00:31:40.918 "trtype": "tcp", 00:31:40.918 "traddr": "10.0.0.2", 00:31:40.918 "adrfam": "ipv4", 00:31:40.918 "trsvcid": "4420", 00:31:40.918 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:40.918 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:40.918 "hdgst": false, 00:31:40.918 "ddgst": false 00:31:40.918 }, 00:31:40.918 "method": "bdev_nvme_attach_controller" 00:31:40.918 }' 00:31:40.918 [2024-12-09 10:42:25.380005] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:31:40.918 [2024-12-09 10:42:25.380110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:40.918 [2024-12-09 10:42:25.463111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.918 [2024-12-09 10:42:25.525308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2168606 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:31:43.446 10:42:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:31:44.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2168606 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2168426 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.383 { 00:31:44.383 "params": { 00:31:44.383 "name": "Nvme$subsystem", 00:31:44.383 "trtype": "$TEST_TRANSPORT", 00:31:44.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.383 "adrfam": "ipv4", 00:31:44.383 "trsvcid": "$NVMF_PORT", 00:31:44.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.383 "hdgst": ${hdgst:-false}, 00:31:44.383 "ddgst": ${ddgst:-false} 00:31:44.383 }, 00:31:44.383 "method": "bdev_nvme_attach_controller" 00:31:44.383 } 00:31:44.383 EOF 00:31:44.383 )") 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.383 { 00:31:44.383 "params": { 00:31:44.383 "name": "Nvme$subsystem", 00:31:44.383 "trtype": "$TEST_TRANSPORT", 00:31:44.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.383 "adrfam": "ipv4", 00:31:44.383 "trsvcid": "$NVMF_PORT", 00:31:44.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.383 "hdgst": ${hdgst:-false}, 00:31:44.383 "ddgst": ${ddgst:-false} 00:31:44.383 }, 00:31:44.383 "method": "bdev_nvme_attach_controller" 00:31:44.383 } 00:31:44.383 EOF 00:31:44.383 )") 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.383 { 00:31:44.383 "params": { 00:31:44.383 "name": "Nvme$subsystem", 00:31:44.383 "trtype": "$TEST_TRANSPORT", 00:31:44.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.383 "adrfam": "ipv4", 00:31:44.383 "trsvcid": "$NVMF_PORT", 00:31:44.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.383 "hdgst": ${hdgst:-false}, 00:31:44.383 "ddgst": ${ddgst:-false} 00:31:44.383 }, 00:31:44.383 "method": "bdev_nvme_attach_controller" 00:31:44.383 } 00:31:44.383 EOF 00:31:44.383 )") 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.383 { 00:31:44.383 "params": { 00:31:44.383 "name": "Nvme$subsystem", 00:31:44.383 "trtype": "$TEST_TRANSPORT", 00:31:44.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.383 "adrfam": "ipv4", 00:31:44.383 
"trsvcid": "$NVMF_PORT", 00:31:44.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.383 "hdgst": ${hdgst:-false}, 00:31:44.383 "ddgst": ${ddgst:-false} 00:31:44.383 }, 00:31:44.383 "method": "bdev_nvme_attach_controller" 00:31:44.383 } 00:31:44.383 EOF 00:31:44.383 )") 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.383 { 00:31:44.383 "params": { 00:31:44.383 "name": "Nvme$subsystem", 00:31:44.383 "trtype": "$TEST_TRANSPORT", 00:31:44.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.383 "adrfam": "ipv4", 00:31:44.383 "trsvcid": "$NVMF_PORT", 00:31:44.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.383 "hdgst": ${hdgst:-false}, 00:31:44.383 "ddgst": ${ddgst:-false} 00:31:44.383 }, 00:31:44.383 "method": "bdev_nvme_attach_controller" 00:31:44.383 } 00:31:44.383 EOF 00:31:44.383 )") 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.383 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.383 { 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme$subsystem", 00:31:44.384 "trtype": "$TEST_TRANSPORT", 00:31:44.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "$NVMF_PORT", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.384 "hdgst": ${hdgst:-false}, 00:31:44.384 "ddgst": ${ddgst:-false} 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 } 00:31:44.384 EOF 00:31:44.384 )") 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.384 { 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme$subsystem", 00:31:44.384 "trtype": "$TEST_TRANSPORT", 00:31:44.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "$NVMF_PORT", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.384 "hdgst": ${hdgst:-false}, 00:31:44.384 "ddgst": ${ddgst:-false} 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 } 00:31:44.384 EOF 00:31:44.384 )") 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.384 { 00:31:44.384 
"params": { 00:31:44.384 "name": "Nvme$subsystem", 00:31:44.384 "trtype": "$TEST_TRANSPORT", 00:31:44.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "$NVMF_PORT", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.384 "hdgst": ${hdgst:-false}, 00:31:44.384 "ddgst": ${ddgst:-false} 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 } 00:31:44.384 EOF 00:31:44.384 )") 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.384 { 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme$subsystem", 00:31:44.384 "trtype": "$TEST_TRANSPORT", 00:31:44.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "$NVMF_PORT", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.384 "hdgst": ${hdgst:-false}, 00:31:44.384 "ddgst": ${ddgst:-false} 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 } 00:31:44.384 EOF 00:31:44.384 )") 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.384 { 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme$subsystem", 00:31:44.384 "trtype": "$TEST_TRANSPORT", 00:31:44.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "$NVMF_PORT", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.384 "hdgst": ${hdgst:-false}, 00:31:44.384 "ddgst": ${ddgst:-false} 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 } 00:31:44.384 EOF 00:31:44.384 )") 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:31:44.384 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme1", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme2", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme3", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme4", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme5", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme6", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme7", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme8", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme9", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 },{ 00:31:44.384 "params": { 00:31:44.384 "name": "Nvme10", 00:31:44.384 "trtype": "tcp", 00:31:44.384 "traddr": "10.0.0.2", 00:31:44.384 "adrfam": "ipv4", 00:31:44.384 "trsvcid": "4420", 00:31:44.384 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:44.384 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:44.384 "hdgst": false, 00:31:44.384 "ddgst": false 00:31:44.384 }, 00:31:44.384 "method": "bdev_nvme_attach_controller" 00:31:44.384 }' 00:31:44.384 [2024-12-09 10:42:28.852380] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:31:44.385 [2024-12-09 10:42:28.852481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169025 ] 00:31:44.385 [2024-12-09 10:42:28.946102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.385 [2024-12-09 10:42:29.008069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.288 Running I/O for 1 seconds... 00:31:47.484 1668.00 IOPS, 104.25 MiB/s 00:31:47.484 Latency(us) 00:31:47.484 [2024-12-09T09:42:32.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.484 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme1n1 : 1.15 222.09 13.88 0.00 0.00 283045.74 19126.80 270299.59 00:31:47.484 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme2n1 : 1.14 223.85 13.99 0.00 0.00 277666.32 32622.36 245444.46 00:31:47.484 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme3n1 : 1.13 229.73 14.36 0.00 0.00 264810.44 4708.88 254765.13 00:31:47.484 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme4n1 : 1.14 225.46 14.09 0.00 0.00 265898.67 18252.99 268746.15 00:31:47.484 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme5n1 : 1.12 171.76 10.73 0.00 0.00 342143.18 21748.24 287387.50 00:31:47.484 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme6n1 : 1.17 223.00 13.94 0.00 0.00 259118.35 1286.45 287387.50 00:31:47.484 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme7n1 : 1.16 221.51 13.84 0.00 0.00 256263.02 20000.62 268746.15 00:31:47.484 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification 
LBA range: start 0x0 length 0x400 00:31:47.484 Nvme8n1 : 1.15 223.16 13.95 0.00 0.00 249423.27 38447.79 250104.79 00:31:47.484 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme9n1 : 1.17 220.92 13.81 0.00 0.00 247469.41 2779.21 288940.94 00:31:47.484 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:47.484 Verification LBA range: start 0x0 length 0x400 00:31:47.484 Nvme10n1 : 1.21 263.89 16.49 0.00 0.00 205011.06 5898.24 301368.51 00:31:47.484 [2024-12-09T09:42:32.138Z] =================================================================================================================== 00:31:47.484 [2024-12-09T09:42:32.138Z] Total : 2225.36 139.09 0.00 0.00 261641.04 1286.45 301368.51 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:47.743 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:47.743 rmmod nvme_tcp 00:31:47.743 rmmod nvme_fabrics 00:31:47.743 rmmod nvme_keyring 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2168426 ']' 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2168426 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2168426 ']' 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2168426 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2168426 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2168426' 00:31:48.002 killing process with pid 2168426 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2168426 00:31:48.002 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2168426 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.568 10:42:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.093 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:51.093 00:31:51.093 real 0m13.994s 00:31:51.093 user 0m40.097s 00:31:51.093 sys 0m4.397s 00:31:51.093 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:51.093 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:51.093 ************************************ 00:31:51.093 END TEST nvmf_shutdown_tc1 00:31:51.093 ************************************ 00:31:51.093 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:31:51.093 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:51.093 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:51.094 ************************************ 00:31:51.094 START TEST nvmf_shutdown_tc2 00:31:51.094 ************************************ 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:31:51.094 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:51.094 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:51.094 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:51.094 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:51.094 Found net devices under 0000:84:00.0: cvl_0_0 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:51.094 Found net devices under 0000:84:00.1: cvl_0_1 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.094 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.095 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:31:51.095 00:31:51.095 --- 10.0.0.2 ping statistics --- 00:31:51.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.095 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:31:51.095 00:31:51.095 --- 10.0.0.1 ping statistics --- 00:31:51.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.095 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2169804 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2169804 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2169804 ']' 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
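The nvmf_tcp_init sequence traced above boils down to a small two-namespace topology: the target port is moved into its own netns with 10.0.0.2 while the initiator port stays in the default namespace with 10.0.0.1. A condensed sketch, assuming the same cvl_0_0/cvl_0_1 names the harness assigned to the two ice ports:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on port 4420, then verify reachability both ways
# (the harness also tags the rule with an SPDK_NVMF comment so teardown can find it)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

From here on the target always runs inside that namespace, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk) above.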
00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:51.095 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.095 [2024-12-09 10:42:35.598947] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:31:51.095 [2024-12-09 10:42:35.599134] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.095 [2024-12-09 10:42:35.736616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:51.352 [2024-12-09 10:42:35.805240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.352 [2024-12-09 10:42:35.805313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.352 [2024-12-09 10:42:35.805330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:51.352 [2024-12-09 10:42:35.805345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:51.352 [2024-12-09 10:42:35.805357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.352 [2024-12-09 10:42:35.807286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.352 [2024-12-09 10:42:35.807315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:51.352 [2024-12-09 10:42:35.807522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:51.352 [2024-12-09 10:42:35.807527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.352 [2024-12-09 10:42:35.976443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:51.352 10:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.352 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.353 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.353 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.353 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.353 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.353 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.353 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.353 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.353 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.610 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:51.610 Malloc1 
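Each pass of the create_subsystems loop above appends one block to rpcs.txt via cat, and the Malloc1..Malloc10 bdevs that follow are those blocks being replayed against the target in a single rpc_cmd batch. Per subsystem the batch is roughly equivalent to these direct scripts/rpc.py calls (a sketch only: the 64 MiB/512 B malloc geometry and the SPDK$i serial are illustrative, not read from this trace):

i=1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i                 # backing bdev
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The nqn.2016-06.io.spdk:cnode$i naming matches the subnqn values in the bdevperf config generated below, and the listener address/port match the 10.0.0.2:4420 listen notice in this trace.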
00:31:51.610 [2024-12-09 10:42:36.089462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.610 Malloc2 00:31:51.610 Malloc3 00:31:51.610 Malloc4 00:31:51.868 Malloc5 00:31:51.868 Malloc6 00:31:51.868 Malloc7 00:31:51.868 Malloc8 00:31:51.868 Malloc9 00:31:51.868 Malloc10 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2169981 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2169981 /var/tmp/bdevperf.sock 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2169981 ']' 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:52.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
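bdevperf does not discover the targets over RPC; it reads its controllers from the JSON handed to it on /dev/fd/63, which is exactly what a process substitution of gen_nvmf_target_json produces. The invocation traced at shutdown.sh@103 is, roughly:

# -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: read-back verification
# workload; -t 10: run time in seconds; -r: a private RPC socket, kept separate
# from the target's /var/tmp/spdk.sock so iostat polling can address this process
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 10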
00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.128 { 00:31:52.128 "params": { 00:31:52.128 "name": "Nvme$subsystem", 00:31:52.128 "trtype": "$TEST_TRANSPORT", 00:31:52.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.128 "adrfam": "ipv4", 00:31:52.128 "trsvcid": "$NVMF_PORT", 00:31:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.128 "hdgst": ${hdgst:-false}, 00:31:52.128 "ddgst": ${ddgst:-false} 00:31:52.128 }, 00:31:52.128 "method": "bdev_nvme_attach_controller" 00:31:52.128 } 00:31:52.128 EOF 00:31:52.128 )") 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.128 { 00:31:52.128 "params": { 00:31:52.128 "name": "Nvme$subsystem", 00:31:52.128 "trtype": "$TEST_TRANSPORT", 00:31:52.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.128 "adrfam": "ipv4", 00:31:52.128 "trsvcid": "$NVMF_PORT", 00:31:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.128 "hdgst": ${hdgst:-false}, 00:31:52.128 "ddgst": ${ddgst:-false} 00:31:52.128 }, 00:31:52.128 "method": "bdev_nvme_attach_controller" 00:31:52.128 } 00:31:52.128 EOF 00:31:52.128 )") 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.128 { 00:31:52.128 "params": { 00:31:52.128 "name": "Nvme$subsystem", 00:31:52.128 "trtype": "$TEST_TRANSPORT", 00:31:52.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.128 "adrfam": "ipv4", 00:31:52.128 "trsvcid": "$NVMF_PORT", 00:31:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.128 "hdgst": ${hdgst:-false}, 00:31:52.128 "ddgst": ${ddgst:-false} 00:31:52.128 }, 00:31:52.128 "method": "bdev_nvme_attach_controller" 00:31:52.128 } 00:31:52.128 EOF 00:31:52.128 )") 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.128 { 00:31:52.128 "params": { 00:31:52.128 "name": "Nvme$subsystem", 00:31:52.128 
"trtype": "$TEST_TRANSPORT", 00:31:52.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.128 "adrfam": "ipv4", 00:31:52.128 "trsvcid": "$NVMF_PORT", 00:31:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.128 "hdgst": ${hdgst:-false}, 00:31:52.128 "ddgst": ${ddgst:-false} 00:31:52.128 }, 00:31:52.128 "method": "bdev_nvme_attach_controller" 00:31:52.128 } 00:31:52.128 EOF 00:31:52.128 )") 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.128 { 00:31:52.128 "params": { 00:31:52.128 "name": "Nvme$subsystem", 00:31:52.128 "trtype": "$TEST_TRANSPORT", 00:31:52.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.128 "adrfam": "ipv4", 00:31:52.128 "trsvcid": "$NVMF_PORT", 00:31:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.128 "hdgst": ${hdgst:-false}, 00:31:52.128 "ddgst": ${ddgst:-false} 00:31:52.128 }, 00:31:52.128 "method": "bdev_nvme_attach_controller" 00:31:52.128 } 00:31:52.128 EOF 00:31:52.128 )") 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.128 { 00:31:52.128 "params": { 00:31:52.128 "name": "Nvme$subsystem", 00:31:52.128 "trtype": "$TEST_TRANSPORT", 00:31:52.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.128 "adrfam": "ipv4", 00:31:52.128 "trsvcid": "$NVMF_PORT", 00:31:52.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.128 "hdgst": ${hdgst:-false}, 00:31:52.128 "ddgst": ${ddgst:-false} 00:31:52.128 }, 00:31:52.128 "method": "bdev_nvme_attach_controller" 00:31:52.128 } 00:31:52.128 EOF 00:31:52.128 )") 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.128 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.129 { 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme$subsystem", 00:31:52.129 "trtype": "$TEST_TRANSPORT", 00:31:52.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "$NVMF_PORT", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.129 "hdgst": ${hdgst:-false}, 00:31:52.129 "ddgst": ${ddgst:-false} 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 } 00:31:52.129 EOF 00:31:52.129 )") 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.129 10:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.129 { 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme$subsystem", 00:31:52.129 "trtype": "$TEST_TRANSPORT", 00:31:52.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "$NVMF_PORT", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.129 "hdgst": ${hdgst:-false}, 00:31:52.129 "ddgst": ${ddgst:-false} 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 } 00:31:52.129 EOF 00:31:52.129 )") 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.129 { 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme$subsystem", 00:31:52.129 "trtype": "$TEST_TRANSPORT", 00:31:52.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "$NVMF_PORT", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.129 "hdgst": ${hdgst:-false}, 00:31:52.129 "ddgst": ${ddgst:-false} 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 } 00:31:52.129 EOF 00:31:52.129 )") 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:52.129 { 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme$subsystem", 00:31:52.129 "trtype": "$TEST_TRANSPORT", 00:31:52.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "$NVMF_PORT", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.129 "hdgst": ${hdgst:-false}, 00:31:52.129 "ddgst": ${ddgst:-false} 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 } 00:31:52.129 EOF 00:31:52.129 )") 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
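The @562/@582 loop traced above is the whole config generator: one bdev_nvme_attach_controller block per subsystem, comma-joined (the IFS=,/printf just below) and pretty-printed through jq. Pulled out of nvmf/common.sh it reduces to roughly the following sketch, with traddr/trsvcid hardcoded and hdgst/ddgst pinned to their false defaults for brevity:

config=()
for subsystem in "$@"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# join the blocks with commas and pretty-print; this is the controller list
# bdevperf receives on /dev/fd/63 (IFS left narrowed here for simplicity)
IFS=,
printf '%s\n' "${config[*]}" | jq .

With "$@" = 1..10 this yields the ten attach blocks printed next in the trace.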
00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:31:52.129 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme1", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme2", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme3", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme4", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme5", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme6", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme7", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme8", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme9", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 },{ 00:31:52.129 "params": { 00:31:52.129 "name": "Nvme10", 00:31:52.129 "trtype": "tcp", 00:31:52.129 "traddr": "10.0.0.2", 00:31:52.129 "adrfam": "ipv4", 00:31:52.129 "trsvcid": "4420", 00:31:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:52.129 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:52.129 "hdgst": false, 00:31:52.129 "ddgst": false 00:31:52.129 }, 00:31:52.129 "method": "bdev_nvme_attach_controller" 00:31:52.129 }' 00:31:52.129 [2024-12-09 10:42:36.652363] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:31:52.129 [2024-12-09 10:42:36.652458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169981 ] 00:31:52.129 [2024-12-09 10:42:36.740794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.388 [2024-12-09 10:42:36.802522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.293 Running I/O for 10 seconds... 00:31:54.293 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.293 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:54.293 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:54.294 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.552 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:31:54.552 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:31:54.552 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2169981 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2169981 ']' 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2169981 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169981 00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:54.813 10:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169981'
00:31:54.813 killing process with pid 2169981
00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2169981
00:31:54.813 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2169981
00:31:54.813 Received shutdown signal, test time was about 0.970643 seconds
00:31:54.813
00:31:54.813 Latency(us)
00:31:54.813 [2024-12-09T09:42:39.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:54.813 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme1n1 : 0.94 222.85 13.93 0.00 0.00 277332.37 16311.18 267192.70
00:31:54.813 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme2n1 : 0.95 202.21 12.64 0.00 0.00 306072.59 20583.16 279620.27
00:31:54.813 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme3n1 : 0.97 265.03 16.56 0.00 0.00 227457.33 29709.65 260978.92
00:31:54.813 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme4n1 : 0.97 263.97 16.50 0.00 0.00 224594.68 18350.08 271853.04
00:31:54.813 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme5n1 : 0.95 207.32 12.96 0.00 0.00 278267.81 2876.30 254765.13
00:31:54.813 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme6n1 : 0.93 214.30 13.39 0.00 0.00 259460.48 4975.88 245444.46
00:31:54.813 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme7n1 : 0.92 207.67 12.98 0.00 0.00 264867.59 51263.72 231463.44
00:31:54.813 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme8n1 : 0.93 211.91 13.24 0.00 0.00 251363.21 3519.53 267192.70
00:31:54.813 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme9n1 : 0.96 204.52 12.78 0.00 0.00 257831.78 2135.99 274959.93
00:31:54.813 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:54.813 Verification LBA range: start 0x0 length 0x400
00:31:54.813 Nvme10n1 : 0.96 199.74 12.48 0.00 0.00 258873.27 21554.06 290494.39
00:31:54.813 [2024-12-09T09:42:39.467Z] ===================================================================================================================
00:31:54.813 [2024-12-09T09:42:39.467Z] Total : 2199.52 137.47 0.00 0.00 258628.06 2135.99 290494.39
00:31:54.813 [2024-12-09T09:42:39.467Z] ===================================================================================================================
00:31:55.073 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
target/shutdown.sh@115 -- # kill -0 2169804 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:56.452 rmmod nvme_tcp 00:31:56.452 rmmod nvme_fabrics 00:31:56.452 rmmod nvme_keyring 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2169804 ']' 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2169804 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2169804 ']' 00:31:56.452 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2169804 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169804 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169804' 00:31:56.453 killing process with pid 2169804 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 2169804 00:31:56.453 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2169804 00:31:57.021 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.021 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.021 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.021 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:31:57.021 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:31:57.021 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.022 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.022 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.022 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.022 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.022 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.022 10:42:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.923 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.923 00:31:58.923 real 0m8.296s 00:31:58.923 user 0m25.319s 00:31:58.923 sys 0m1.783s 00:31:58.923 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.923 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:58.923 ************************************ 00:31:58.923 END TEST nvmf_shutdown_tc2 00:31:58.923 ************************************ 00:31:58.923 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:58.923 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:58.923 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.923 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:59.183 ************************************ 00:31:59.183 START TEST nvmf_shutdown_tc3 00:31:59.183 ************************************ 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.183 10:42:43 
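Before tc3 repeats the same bring-up below, note the gate tc2 applied above: waitforio (target/shutdown.sh@51-70 in the trace) refuses to stop anything until bdevperf shows real I/O progress. Condensed from the traced fragments, with rpc_cmd written out as the scripts/rpc.py call it wraps:

# poll Nvme1n1's read counter over bdevperf's private RPC socket and proceed
# only once at least 100 reads completed (67 on the first poll above, 131 after
# the 0.25 s sleep); give up after 10 attempts
i=10
while (( i != 0 )); do
  count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
    | jq -r '.bdevs[0].num_read_ops')
  [[ $count -ge 100 ]] && break
  sleep 0.25
  (( i-- ))
done

The teardown that closed tc2 is the setup in reverse: the SPDK_NVMF-tagged iptables rule is filtered back out (iptables-save | grep -v SPDK_NVMF | iptables-restore), the cvl_0_0_ns_spdk namespace is removed, and the leftover addresses are flushed.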
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:59.183 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.183 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.184 10:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:59.184 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:59.184 Found net devices under 0000:84:00.0: cvl_0_0 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.184 10:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:59.184 Found net devices under 0000:84:00.1: cvl_0_1 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.184 10:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:31:59.184 00:31:59.184 --- 10.0.0.2 ping statistics --- 00:31:59.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.184 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:59.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:31:59.184 00:31:59.184 --- 10.0.0.1 ping statistics --- 00:31:59.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.184 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.184 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.185 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.185 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2170889 00:31:59.444 10:42:43 
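The interface plumbing traced above is self-contained enough to restate: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to play the target, the other (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens TCP/4420 on the initiator side, and a ping in each direction proves the 10.0.0.0/24 link before nvmf_tgt starts. A minimal sketch of the same sequence, with the placeholder names NS/IF_TGT/IF_INI standing in for cvl_0_0_ns_spdk/cvl_0_0/cvl_0_1:

#!/usr/bin/env bash
# Sketch only: mirrors the ip/iptables calls visible in the trace above.
NS=tgt_ns        # stand-in for cvl_0_0_ns_spdk
IF_TGT=if_tgt    # stand-in for cvl_0_0 (target-side port)
IF_INI=if_ini    # stand-in for cvl_0_1 (initiator-side port)
ip -4 addr flush "$IF_TGT"
ip -4 addr flush "$IF_INI"
ip netns add "$NS"
ip link set "$IF_TGT" netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$IF_INI"          # initiator IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
ip link set "$IF_INI" up
ip netns exec "$NS" ip link set "$IF_TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace -> root ns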
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2170889 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2170889 ']' 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.444 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.444 [2024-12-09 10:42:43.917120] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:31:59.444 [2024-12-09 10:42:43.917221] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.444 [2024-12-09 10:42:44.055753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:59.704 [2024-12-09 10:42:44.178523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.704 [2024-12-09 10:42:44.178629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.704 [2024-12-09 10:42:44.178667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.704 [2024-12-09 10:42:44.178697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.704 [2024-12-09 10:42:44.178737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
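waitforlisten above does nothing more exotic than poll for the target's RPC socket while checking that the process is still alive; the local max_retries=100 seen in the trace bounds that poll. A rough equivalent of the helper's shape (a sketch, not the autotest implementation itself):

# Sketch: bounded poll for an SPDK app's UNIX-domain RPC socket.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
        [ -S "$sock" ] && return 0               # socket present: RPC server is up
        sleep 0.1
    done
    return 1                                     # timed out
}
# usage: ./build/bin/nvmf_tgt -m 0x1E & wait_for_rpc_sock $!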
00:31:59.704 [2024-12-09 10:42:44.182236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.704 [2024-12-09 10:42:44.182337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.704 [2024-12-09 10:42:44.182387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:59.704 [2024-12-09 10:42:44.182391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.704 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.704 [2024-12-09 10:42:44.350901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.963 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.963 Malloc1 00:31:59.963 [2024-12-09 10:42:44.461963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.963 Malloc2 00:31:59.963 Malloc3 00:31:59.963 Malloc4 00:32:00.262 Malloc5 00:32:00.262 Malloc6 00:32:00.262 Malloc7 00:32:00.262 Malloc8 00:32:00.262 Malloc9 00:32:00.262 Malloc10 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2171066 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2171066 /var/tmp/bdevperf.sock 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2171066 ']' 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:00.521 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.521 { 00:32:00.521 "params": { 00:32:00.521 "name": "Nvme$subsystem", 00:32:00.521 "trtype": "$TEST_TRANSPORT", 00:32:00.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.521 "adrfam": "ipv4", 00:32:00.521 "trsvcid": "$NVMF_PORT", 00:32:00.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.521 "hdgst": ${hdgst:-false}, 00:32:00.521 "ddgst": ${ddgst:-false} 00:32:00.521 }, 00:32:00.521 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": 
"Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.522 { 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme$subsystem", 00:32:00.522 "trtype": "$TEST_TRANSPORT", 00:32:00.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "$NVMF_PORT", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.522 "hdgst": ${hdgst:-false}, 00:32:00.522 "ddgst": ${ddgst:-false} 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 } 00:32:00.522 EOF 00:32:00.522 )") 00:32:00.522 10:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:32:00.522 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:32:00.522 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:32:00.522 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme1", 00:32:00.522 "trtype": "tcp", 00:32:00.522 "traddr": "10.0.0.2", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "4420", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:00.522 "hdgst": false, 00:32:00.522 "ddgst": false 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 },{ 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme2", 00:32:00.522 "trtype": "tcp", 00:32:00.522 "traddr": "10.0.0.2", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "4420", 00:32:00.522 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:00.522 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:00.522 "hdgst": false, 00:32:00.522 "ddgst": false 00:32:00.522 }, 00:32:00.522 "method": "bdev_nvme_attach_controller" 00:32:00.522 },{ 00:32:00.522 "params": { 00:32:00.522 "name": "Nvme3", 00:32:00.522 "trtype": "tcp", 00:32:00.522 "traddr": "10.0.0.2", 00:32:00.522 "adrfam": "ipv4", 00:32:00.522 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 },{ 00:32:00.523 "params": { 00:32:00.523 "name": "Nvme4", 00:32:00.523 "trtype": "tcp", 00:32:00.523 "traddr": "10.0.0.2", 00:32:00.523 "adrfam": "ipv4", 00:32:00.523 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 },{ 00:32:00.523 "params": { 00:32:00.523 "name": "Nvme5", 00:32:00.523 "trtype": "tcp", 00:32:00.523 "traddr": "10.0.0.2", 00:32:00.523 "adrfam": "ipv4", 00:32:00.523 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 },{ 00:32:00.523 "params": { 00:32:00.523 "name": "Nvme6", 00:32:00.523 "trtype": "tcp", 00:32:00.523 "traddr": "10.0.0.2", 00:32:00.523 "adrfam": "ipv4", 00:32:00.523 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 },{ 00:32:00.523 "params": { 00:32:00.523 "name": "Nvme7", 00:32:00.523 "trtype": "tcp", 00:32:00.523 "traddr": "10.0.0.2", 00:32:00.523 "adrfam": "ipv4", 00:32:00.523 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 },{ 00:32:00.523 "params": { 00:32:00.523 "name": "Nvme8", 00:32:00.523 "trtype": "tcp", 
00:32:00.523 "traddr": "10.0.0.2", 00:32:00.523 "adrfam": "ipv4", 00:32:00.523 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 },{ 00:32:00.523 "params": { 00:32:00.523 "name": "Nvme9", 00:32:00.523 "trtype": "tcp", 00:32:00.523 "traddr": "10.0.0.2", 00:32:00.523 "adrfam": "ipv4", 00:32:00.523 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 },{ 00:32:00.523 "params": { 00:32:00.523 "name": "Nvme10", 00:32:00.523 "trtype": "tcp", 00:32:00.523 "traddr": "10.0.0.2", 00:32:00.523 "adrfam": "ipv4", 00:32:00.523 "trsvcid": "4420", 00:32:00.523 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:00.523 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:00.523 "hdgst": false, 00:32:00.523 "ddgst": false 00:32:00.523 }, 00:32:00.523 "method": "bdev_nvme_attach_controller" 00:32:00.523 }' 00:32:00.523 [2024-12-09 10:42:45.014787] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:32:00.523 [2024-12-09 10:42:45.014880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171066 ] 00:32:00.523 [2024-12-09 10:42:45.093352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.523 [2024-12-09 10:42:45.153642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.476 Running I/O for 10 seconds... 
00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:32:02.735 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2170889 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2170889 ']' 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2170889 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.994 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170889 00:32:03.271 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:03.271 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:03.271 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170889' 00:32:03.271 killing process with pid 2170889 00:32:03.271 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2170889 00:32:03.271 10:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2170889 00:32:03.271 [2024-12-09 10:42:47.679907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378310 is same with the state(6) to be set 00:32:03.271 [2024-12-09 10:42:47.680003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378310 is same with the state(6) to be set 00:32:03.271 [2024-12-09 10:42:47.680021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378310 is same with the state(6) to be set 00:32:03.271 [2024-12-09 10:42:47.680048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378310 is same with the state(6) to be set 00:32:03.271 [2024-12-09 10:42:47.680062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378310 is same with the state(6) to be set 00:32:03.271 [2024-12-09 10:42:47.680075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x2378310 is same with the state(6) to be set 00:32:03.271 [2024-12-09 10:42:47.680088 .. 10:42:47.680841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378310 is same with the state(6) to be set [previous message repeated once per timestamp in this range] 00:32:03.272 [2024-12-09 10:42:47.682446 .. 10:42:47.683290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109760 is same with the state(6) to be set [previous message repeated once per timestamp in this range] 00:32:03.273 [2024-12-09 10:42:47.684933 .. 10:42:47.685344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set [previous message repeated once per timestamp in this range]
with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685623] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.685775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23787e0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the 
state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.687994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.273 [2024-12-09 10:42:47.688166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 10:42:47.688466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378cb0 is same with the state(6) to be set 00:32:03.274 [2024-12-09 
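Editor's note: the flood of identical tcp.c:1790 lines above is the signature of a state-machine setter that logs every time it is asked to re-enter the receive state the qpair is already in, called once per poll iteration while the connection is torn down. Below is a minimal, hypothetical sketch of that guard pattern; the names and the state numbering are assumptions for illustration, not the actual SPDK implementation.

```c
/*
 * Hypothetical sketch only -- not the SPDK source. It reproduces the
 * logging pattern seen above: a setter that emits one error line every
 * time it is asked to re-enter the state the qpair is already in.
 */
#include <stdio.h>

enum recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	/* ... intermediate receive states ... */
	RECV_STATE_ERROR = 6	/* assumed to match "state(6)" in the log */
};

struct tqpair {
	enum recv_state recv_state;
};

static void
qpair_set_recv_state(struct tqpair *tq, enum recv_state state)
{
	if (tq->recv_state == state) {
		/* A teardown path that calls this once per poll iteration
		 * produces exactly the flood of lines seen above. */
		fprintf(stderr, "The recv state of tqpair=%p is same with "
			"the state(%d) to be set\n", (void *)tq, (int)state);
		return;
	}
	tq->recv_state = state;
}

int main(void)
{
	struct tqpair tq = { .recv_state = RECV_STATE_ERROR };

	for (int i = 0; i < 3; i++) {
		qpair_set_recv_state(&tq, RECV_STATE_ERROR);
	}
	return 0;
}
```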
00:32:03.274 [2024-12-09 10:42:47.689358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:03.274 [2024-12-09 10:42:47.689399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:03.274 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3 ...] 
00:32:03.274 [2024-12-09 10:42:47.689498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270c5a0 is same with the state(6) to be set 
00:32:03.274 [... the same four-AER abort sequence and nvme_tcp.c: 326 recv-state error repeated for tqpair=0x22a1cd0, tqpair=0x22ac910 and tqpair=0x22ac480 (10:42:47.689594 through 10:42:47.690096) ...] 
00:32:03.274 [2024-12-09 10:42:47.691801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23791a0 is same with the state(6) to be set 
00:32:03.275 [... repeated for tqpair=0x23791a0 through 10:42:47.692655 ...]
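Editor's note: the "(00/08)" printed with each aborted ASYNC EVENT REQUEST is NVMe status code type 00h (generic) / status code 08h (Command Aborted due to SQ Deletion). When the admin submission queue goes away during teardown, every outstanding AER (cid 0 through 3 here) is completed with that status. An illustrative sketch, with assumed names rather than SPDK's API:

```c
/*
 * Illustrative sketch (assumed names, not SPDK's API): completing the
 * outstanding ASYNC EVENT REQUESTs with SCT 00h / SC 08h, which the log
 * above prints as "ABORTED - SQ DELETION (00/08)".
 */
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC		0x00	/* status code type: generic */
#define SC_ABORTED_SQ_DELETION	0x08	/* command aborted due to SQ deletion */

struct cpl {
	uint16_t cid;
	uint8_t sct;
	uint8_t sc;
};

/* Complete every outstanding AER as aborted when the admin SQ is deleted. */
static void
abort_outstanding_aers(uint16_t num_aers)
{
	for (uint16_t cid = 0; cid < num_aers; cid++) {
		struct cpl c = { .cid = cid, .sct = SCT_GENERIC,
				 .sc = SC_ABORTED_SQ_DELETION };
		printf("ABORTED - SQ DELETION (%02x/%02x) cid:%u\n",
		       c.sct, c.sc, c.cid);
	}
}

int main(void)
{
	abort_outstanding_aers(4);	/* cid 0..3, as in the log above */
	return 0;
}
```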
00:32:03.275 [2024-12-09 10:42:47.694571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379670 is same with the state(6) to be set 
00:32:03.275 [... repeated for tqpair=0x2379670 through 10:42:47.695419 ...] 
00:32:03.275 [2024-12-09 10:42:47.694772] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:32:03.276 [... the same nvme_tcp.c:1184 error interleaved with the run above at 10:42:47.694858, 10:42:47.694936, 10:42:47.695012 and 10:42:47.696409 ...] 
00:32:03.276 [2024-12-09 10:42:47.696903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379b40 is same with the state(6) to be set 
00:32:03.277 [... repeated for tqpair=0x2379b40 through 10:42:47.697715 ...]
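Editor's note: the interleaved nvme_tcp.c:1184 errors come from common-header validation on the host side. The first byte of every NVMe/TCP PDU is its type, and a host only ever expects controller-to-host types; bytes read off a torn-down connection tend to decode as type 0x00 (ICReq, a host-to-controller type), hence "Unexpected PDU type 0x00". A hedged sketch of that check follows; the PDU type values are from the NVMe/TCP transport specification, but the function names are assumptions:

```c
/*
 * Hypothetical sketch of common-header validation in an NVMe/TCP host.
 * Type 0x00 (ICReq) is host-to-controller, so a host receiving it logs
 * "Unexpected PDU type 0x00", as seen above.
 */
#include <stdint.h>
#include <stdio.h>

/* PDU type values per the NVMe/TCP transport specification. */
#define PDU_TYPE_IC_REQ		0x00
#define PDU_TYPE_IC_RESP	0x01
#define PDU_TYPE_C2H_TERM_REQ	0x03
#define PDU_TYPE_CAPSULE_RESP	0x05
#define PDU_TYPE_C2H_DATA	0x07
#define PDU_TYPE_R2T		0x09

/* Return 1 if a host may legitimately receive this PDU type. */
static int
host_pdu_type_is_expected(uint8_t pdu_type)
{
	switch (pdu_type) {
	case PDU_TYPE_IC_RESP:
	case PDU_TYPE_C2H_TERM_REQ:
	case PDU_TYPE_CAPSULE_RESP:
	case PDU_TYPE_C2H_DATA:
	case PDU_TYPE_R2T:
		return 1;
	default:
		fprintf(stderr, "Unexpected PDU type 0x%02x\n", pdu_type);
		return 0;
	}
}

int main(void)
{
	/* Zeroed bytes from a dead connection decode as ICReq (0x00). */
	return host_pdu_type_is_expected(0x00) ? 1 : 0;
}
```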
00:32:03.277 [2024-12-09 10:42:47.699283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 
00:32:03.277 [... repeated for tqpair=0x237a010 through 10:42:47.699786 ...] 
00:32:03.277 [2024-12-09 10:42:47.699645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270c5a0 (9): Bad file descriptor 
00:32:03.277 [2024-12-09 10:42:47.699744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:03.277 [2024-12-09 10:42:47.699769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:03.277 [2024-12-09 10:42:47.699788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:03.277 [2024-12-09 10:42:47.699798]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.277 [2024-12-09 10:42:47.699803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.277 [2024-12-09 10:42:47.699811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.277 [2024-12-09 10:42:47.699818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.699823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.699836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-12-09 10:42:47.699848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with id:0 cdw10:00000000 cdw11:00000000 00:32:03.278 the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-09 10:42:47.699863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with [2024-12-09 10:42:47.699878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214110 is same the state(6) to be set 00:32:03.278 with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.699949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with [2024-12-09 10:42:47.699962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(6) to be set 00:32:03.278 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.699976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.699989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.699993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-12-09 10:42:47.700054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with id:0 cdw10:00000000 cdw11:00000000 00:32:03.278 the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-09 10:42:47.700069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with [2024-12-09 10:42:47.700083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d4640 is same the state(6) to be set 00:32:03.278 with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-12-09 10:42:47.700156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a010 is same with id:0 cdw10:00000000 cdw11:00000000 00:32:03.278 the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c9c60 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.700297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a1cd0 (9): Bad file descriptor 00:32:03.278 [2024-12-09 10:42:47.700326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ac910 (9): Bad file descriptor 00:32:03.278 [2024-12-09 10:42:47.700357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ac480 (9): Bad file descriptor 00:32:03.278 [2024-12-09 10:42:47.700405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.278 [2024-12-09 10:42:47.700511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.700524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ca8a0 is same with the state(6) to be set 00:32:03.278 [2024-12-09 10:42:47.701543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.278 [2024-12-09 10:42:47.701571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.701599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.278 [2024-12-09 10:42:47.701615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.701641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.278 [2024-12-09 10:42:47.701657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.701673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.278 [2024-12-09 10:42:47.701687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.701703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.278 [2024-12-09 10:42:47.701717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.278 [2024-12-09 10:42:47.701758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.278 [2024-12-09 10:42:47.701774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.701790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.701805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.701822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.701836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.701852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.701867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.701883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.701898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.701912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.701923] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.701940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 10:42:47.701939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.701957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.701959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.701971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.701974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.701984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:1the state(6) to be set 00:32:03.279 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:32:03.279 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.702035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.702066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.702092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 10:42:47.702118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.702157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 10:42:47.702182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:1the state(6) to be set 00:32:03.279 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:32:03.279 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.702232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 
10:42:47.702258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 10:42:47.702300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.702340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 [2024-12-09 10:42:47.702353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.279 [2024-12-09 10:42:47.702366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.279 [2024-12-09 10:42:47.702377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:1[2024-12-09 10:42:47.702379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.279 the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 10:42:47.702393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:1the state(6) to be set 00:32:03.280 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702427] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:32:03.280 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:1[2024-12-09 10:42:47.702514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:32:03.280 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:1[2024-12-09 10:42:47.702578] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:32:03.280 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with [2024-12-09 10:42:47.702645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:1the state(6) to be set 00:32:03.280 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109290 is same with the state(6) to be set 00:32:03.280 [2024-12-09 10:42:47.702709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.280 [2024-12-09 10:42:47.702765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.280 [2024-12-09 10:42:47.702780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
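The flood of tcp.c:1790 messages above is the target logging a redundant state transition while it tears down its TCP qpairs; state(6) is presumably the terminal error recv state in this SPDK revision (the exact enum value is an assumption here). A minimal self-contained sketch of the guard that emits this line, with type and enum names re-declared locally so it compiles on its own (they mirror, but do not copy, lib/nvmf/tcp.c):

#include <stdio.h>

/* Assumed enum numbering: only the value behind "state(6)" matters here. */
enum pdu_recv_state { RECV_STATE_AWAIT_PDU_READY = 0, RECV_STATE_ERROR = 6 };

struct tcp_qpair { enum pdu_recv_state recv_state; };

static void
set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
        if (tqpair->recv_state == state) {
                /* Redundant transition during teardown: benign, but logged at
                 * ERROR level, which is why qpair shutdown floods the log. */
                fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                        (void *)tqpair, state);
                return;
        }
        tqpair->recv_state = state; /* per-state bookkeeping omitted */
}

int
main(void)
{
        struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };
        set_recv_state(&q, RECV_STATE_ERROR); /* real transition: silent */
        set_recv_state(&q, RECV_STATE_ERROR); /* repeat: reproduces the log line */
        return 0;
}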
00:32:03.280 [2024-12-09 10:42:47.702796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.280 [2024-12-09 10:42:47.702810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands cid:1-28 (lba:16512-19968, 128-block steps), each completed ABORTED - SQ DELETION (00/08), 10:42:47.702826 through 10:42:47.703696 ...]
00:32:03.281 [2024-12-09 10:42:47.703763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:03.281 [2024-12-09 10:42:47.705408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:32:03.281 [2024-12-09 10:42:47.705481]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ef460 (9): Bad file descriptor 00:32:03.281 [2024-12-09 10:42:47.705574] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:32:03.281 [2024-12-09 10:42:47.707098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.281 [2024-12-09 10:42:47.707130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ef460 with addr=10.0.0.2, port=4420 00:32:03.281 [2024-12-09 10:42:47.707148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ef460 is same with the state(6) to be set 00:32:03.281 [2024-12-09 10:42:47.707239] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:32:03.281 [2024-12-09 10:42:47.707327] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:32:03.281 [2024-12-09 10:42:47.707392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.281 [2024-12-09 10:42:47.707414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.281 [2024-12-09 10:42:47.707437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.281 [2024-12-09 10:42:47.707454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.281 [2024-12-09 10:42:47.707471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.281 [2024-12-09 10:42:47.707485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.281 [2024-12-09 10:42:47.707501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.281 [2024-12-09 10:42:47.707516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.281 [2024-12-09 10:42:47.707532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.282 [2024-12-09 10:42:47.707547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.282 [2024-12-09 10:42:47.707563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.282 [2024-12-09 10:42:47.707577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.282 [2024-12-09 10:42:47.707593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.282 [2024-12-09 10:42:47.707614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.282 [2024-12-09 10:42:47.707630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:03.282 [2024-12-09 10:42:47.707645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.282 [2024-12-09 10:42:47.707661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.282 [2024-12-09 10:42:47.707677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.282 [2024-12-09 10:42:47.707698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c9860 is same with the state(6) to be set 00:32:03.282 [2024-12-09 10:42:47.707826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ef460 (9): Bad file descriptor 00:32:03.282 [2024-12-09 10:42:47.708810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:32:03.282 [2024-12-09 10:42:47.708879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ef640 (9): Bad file descriptor 00:32:03.282 [2024-12-09 10:42:47.708906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:32:03.282 [2024-12-09 10:42:47.708921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:32:03.282 [2024-12-09 10:42:47.708940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:32:03.282 [2024-12-09 10:42:47.708956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:32:03.282 [2024-12-09 10:42:47.709540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.282 [2024-12-09 10:42:47.709568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ef640 with addr=10.0.0.2, port=4420 00:32:03.282 [2024-12-09 10:42:47.709585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ef640 is same with the state(6) to be set 00:32:03.282 [2024-12-09 10:42:47.709658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ef640 (9): Bad file descriptor 00:32:03.282 [2024-12-09 10:42:47.709700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2214110 (9): Bad file descriptor 00:32:03.282 [2024-12-09 10:42:47.709755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d4640 (9): Bad file descriptor 00:32:03.282 [2024-12-09 10:42:47.709791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c9c60 (9): Bad file descriptor 00:32:03.282 [2024-12-09 10:42:47.709842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ca8a0 (9): Bad file descriptor 00:32:03.282 [2024-12-09 10:42:47.709957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:32:03.282 [2024-12-09 10:42:47.709978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:32:03.282 [2024-12-09 10:42:47.709993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
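Every outstanding I/O in this section is completed by the host with NVMe status ABORTED - SQ DELETION, i.e. generic status code type 00h with status code 08h, as the target deletes its submission queues, and each reconnect attempt then fails with errno 111 (ECONNREFUSED) because the listener at 10.0.0.2:4420 is gone; that is why cnode8 and cnode9 both report "Resetting controller failed." A small hedged helper for classifying such completions, assuming an SPDK development tree on the include path (the helper itself is hypothetical, not part of SPDK):

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* Hypothetical helper: true when a completion carries the
 * "ABORTED - SQ DELETION (00/08)" status seen throughout this log. */
static bool
cpl_is_aborted_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

In an I/O completion callback, a check like this separates queue-teardown aborts, which are expected during the shutdown path exercised here, from genuine media or transport errors.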
00:32:03.282 [2024-12-09 10:42:47.710006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:32:03.282 [2024-12-09 10:42:47.710078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.282 [2024-12-09 10:42:47.710099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands cid:6-23 (lba:17152-19328), WRITE commands cid:0-4 (lba:24576-25088) and READ commands cid:24-28 (lba:19456-19968), each completed ABORTED - SQ DELETION (00/08), 10:42:47.710121 through 10:42:47.711018 ...]
00:32:03.283 [2024-12-09 10:42:47.711035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711050] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.283 [2024-12-09 10:42:47.711777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.283 [2024-12-09 10:42:47.711792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.711808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.711827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.711845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.711860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.711877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.711892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.711908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.711923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.711939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.711954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.711971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.711986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.712002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.712017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.712033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.712048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.712064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.712080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.712097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.712112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.712129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.712144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.712159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27d1640 is same with the state(6) to be set 00:32:03.284 [2024-12-09 10:42:47.713421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.713467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.713506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.713539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.713570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.713602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.713633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.713665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.713679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.730871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.730949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.730968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.730983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.730999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.284 [2024-12-09 10:42:47.731415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.284 [2024-12-09 10:42:47.731430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.731964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.731983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:03.285 [2024-12-09 10:42:47.732171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 
10:42:47.732489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.285 [2024-12-09 10:42:47.732519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.285 [2024-12-09 10:42:47.732536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.732551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.732568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.732582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.732598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.732612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.732629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.732644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.732661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.732675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.732692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.732707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.732731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.732748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.732765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27d2690 is same with the state(6) to be set 00:32:03.286 [2024-12-09 10:42:47.734171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734541] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.286 [2024-12-09 10:42:47.734851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.286 [2024-12-09 10:42:47.734868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.734883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.734899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.734914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.734931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.734946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.734963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.734977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.734998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:03.287 [2024-12-09 10:42:47.735842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.287 [2024-12-09 10:42:47.735904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.287 [2024-12-09 10:42:47.735921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.735935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.735951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.735966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.735983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.735997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.736028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.736059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.736089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.736120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 
10:42:47.736160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.736195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.736227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.736241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0fc0 is same with the state(6) to be set 00:32:03.288 [2024-12-09 10:42:47.737558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.737972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.737995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738118] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.288 [2024-12-09 10:42:47.738272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.288 [2024-12-09 10:42:47.738285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.738848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.738863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.753976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.753994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.754009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.754026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.754040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.754057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.754072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.754088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.754103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.754119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.754133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.754150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.754165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.754182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.289 [2024-12-09 10:42:47.754196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.289 [2024-12-09 10:42:47.754214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.754229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:03.290 [2024-12-09 10:42:47.754261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.754292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.754324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.754355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.754392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.754424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.754456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.754473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27caac0 is same with the state(6) to be set 00:32:03.290 [2024-12-09 10:42:47.755818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:03.290 [2024-12-09 10:42:47.755863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:32:03.290 [2024-12-09 10:42:47.755887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:32:03.290 [2024-12-09 10:42:47.755907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:32:03.290 [2024-12-09 10:42:47.756554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.290 [2024-12-09 10:42:47.756591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ac910 with addr=10.0.0.2, port=4420 00:32:03.290 [2024-12-09 10:42:47.756611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ac910 is same with the state(6) to be set 00:32:03.290 [2024-12-09 10:42:47.756784] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.290 [2024-12-09 10:42:47.756811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ac480 with addr=10.0.0.2, port=4420 00:32:03.290 [2024-12-09 10:42:47.756828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ac480 is same with the state(6) to be set 00:32:03.290 [2024-12-09 10:42:47.757029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.290 [2024-12-09 10:42:47.757055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a1cd0 with addr=10.0.0.2, port=4420 00:32:03.290 [2024-12-09 10:42:47.757071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a1cd0 is same with the state(6) to be set 00:32:03.290 [2024-12-09 10:42:47.757181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.290 [2024-12-09 10:42:47.757215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x270c5a0 with addr=10.0.0.2, port=4420 00:32:03.290 [2024-12-09 10:42:47.757231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270c5a0 is same with the state(6) to be set 00:32:03.290 [2024-12-09 10:42:47.758128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:03.290 [2024-12-09 10:42:47.758687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.290 [2024-12-09 10:42:47.758705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.758973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.758987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:03.291 [2024-12-09 10:42:47.759018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 
10:42:47.759340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.291 [2024-12-09 10:42:47.759581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.291 [2024-12-09 10:42:47.759596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.759976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.759991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.760022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.760054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.760086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.760117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.760149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.760180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.760230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b2150 is same with the state(6) to be set 00:32:03.292 [2024-12-09 10:42:47.761500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761586] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.761968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.761985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.762000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.292 [2024-12-09 10:42:47.762017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.292 [2024-12-09 10:42:47.762033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
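The paired NOTICE lines above and below are the host-side teardown path: as each TCP qpair is destroyed for a reconnect, SPDK prints every still-outstanding READ on qid 1 (cids counting up toward 63, lba advancing by the 128-block I/O size) together with its ABORTED - SQ DELETION (00/08) completion, and the whole batch repeats once per qpair being torn down (tqpair=0x26b19b0, 0x26b2c70 and 0x26b3f30 in this stretch), each batch ending in an nvme_tcp.c recv-state ERROR line. A minimal sketch for tallying the dump offline, assuming this console output has been saved to console.log (a placeholder name, not a file produced by the job):

    # Tally the READ/ABORT pairs in a saved copy of this console output.
    # console.log is a placeholder path; the regexes mirror the lines above.
    import re

    read_re  = re.compile(r"READ sqid:\d+ cid:(\d+) nsid:\d+ lba:(\d+) len:\d+")
    abort_re = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

    text = open("console.log").read()
    lbas = [int(m.group(2)) for m in read_re.finditer(text)]
    aborts = len(abort_re.findall(text))

    print(f"{len(lbas)} READ prints, {aborts} SQ-deletion completions")
    if lbas:
        print(f"lba span: {min(lbas)}..{max(lbas)}")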
00:32:03.293 [2024-12-09 10:42:47.762904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.762981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.762996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 
10:42:47.763226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.293 [2024-12-09 10:42:47.763395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.293 [2024-12-09 10:42:47.763409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.763425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.763439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.763456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.763470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.763486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.763500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.763516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.763530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.763551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.763566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.763582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26b19b0 is same with the state(6) to be set 00:32:03.294 [2024-12-09 10:42:47.764853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.764876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.764898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.764913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.764930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.764944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.764959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.764974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.764990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.765740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.765757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.294 [2024-12-09 10:42:47.780823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.294 [2024-12-09 10:42:47.780857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.780873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.780890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.780906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.780922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.780937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.780954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.780968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.780985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.781750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.781768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26b2c70 is same with the state(6) to be set 00:32:03.295 [2024-12-09 10:42:47.783196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.295 [2024-12-09 10:42:47.783592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.295 [2024-12-09 10:42:47.783610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.783977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.783991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.296 [2024-12-09 10:42:47.784607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.296 [2024-12-09 10:42:47.784621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.296 [2024-12-09 10:42:47.784888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.296 [2024-12-09 10:42:47.784904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.784935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.784950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.784965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.784979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.784997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.297 [2024-12-09 10:42:47.785269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:03.297 [2024-12-09 10:42:47.785284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26b3f30 is same with the state(6) to be set
00:32:03.297 [2024-12-09 10:42:47.786792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:32:03.297 [2024-12-09 10:42:47.786826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:32:03.297 [2024-12-09 10:42:47.786857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:32:03.297 [2024-12-09 10:42:47.786880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:32:03.297 [2024-12-09 10:42:47.786900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:32:03.297 [2024-12-09 10:42:47.786998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ac910 (9): Bad file descriptor
00:32:03.297 [2024-12-09 10:42:47.787027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ac480 (9): Bad file descriptor
00:32:03.297 [2024-12-09 10:42:47.787047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a1cd0 (9): Bad file descriptor
00:32:03.297 [2024-12-09 10:42:47.787067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270c5a0 (9): Bad file descriptor
00:32:03.297 [2024-12-09 10:42:47.787137] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:32:03.297 [2024-12-09 10:42:47.787164] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:32:03.297 [2024-12-09 10:42:47.787184] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:32:03.297 [2024-12-09 10:42:47.787203] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:32:03.297 [2024-12-09 10:42:47.787221] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:32:03.297 task offset: 20096 on job bdev=Nvme8n1 fails
00:32:03.297
00:32:03.297 Latency(us)
00:32:03.297 [2024-12-09T09:42:47.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:03.297 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme1n1 ended in about 0.79 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme1n1 : 0.79 167.52 10.47 80.61 0.00 254539.66 20097.71 267192.70
00:32:03.297 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme2n1 ended in about 0.81 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme2n1 : 0.81 157.14 9.82 78.57 0.00 261563.23 20874.43 262532.36
00:32:03.297 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme3n1 ended in about 0.82 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme3n1 : 0.82 156.48 9.78 78.24 0.00 256328.19 30874.74 253211.69
00:32:03.297 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme4n1 ended in about 0.84 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme4n1 : 0.84 156.78 9.80 76.02 0.00 252616.02 19223.89 265639.25
00:32:03.297 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme5n1 ended in about 0.85 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme5n1 : 0.85 151.43 9.46 75.72 0.00 252554.81 34175.81 248551.35
00:32:03.297 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme6n1 ended in about 0.86 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme6n1 : 0.86 148.24 9.27 74.12 0.00 252166.95 20388.98 265639.25
00:32:03.297 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme7n1 ended in about 0.87 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme7n1 : 0.87 147.65 9.23 73.83 0.00 246973.12 22913.33 264085.81
00:32:03.297 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme8n1 ended in about 0.79 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme8n1 : 0.79 162.85 10.18 81.43 0.00 213824.66 3058.35 270299.59
00:32:03.297 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme9n1 ended in about 0.79 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme9n1 : 0.79 159.60 9.98 11.40 0.00 296780.63 8349.77 276513.37
00:32:03.297 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:03.297 Job: Nvme10n1 ended in about 0.84 seconds with error
00:32:03.297 Verification LBA range: start 0x0 length 0x400
00:32:03.297 Nvme10n1 : 0.84 76.53 4.78 76.53 0.00 327939.79 20680.25 301368.51
00:32:03.297 [2024-12-09T09:42:47.951Z] ===================================================================================================================
00:32:03.297 [2024-12-09T09:42:47.951Z] Total : 1484.24 92.76 706.46 0.00 258027.45 3058.35 301368.51
00:32:03.297 [2024-12-09 10:42:47.824624] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:03.297 [2024-12-09 10:42:47.824734]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:32:03.297 [2024-12-09 10:42:47.825111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.297 [2024-12-09 10:42:47.825160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ef460 with addr=10.0.0.2, port=4420 00:32:03.297 [2024-12-09 10:42:47.825183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ef460 is same with the state(6) to be set 00:32:03.297 [2024-12-09 10:42:47.825399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.297 [2024-12-09 10:42:47.825425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ef640 with addr=10.0.0.2, port=4420 00:32:03.297 [2024-12-09 10:42:47.825443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ef640 is same with the state(6) to be set 00:32:03.297 [2024-12-09 10:42:47.825634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.297 [2024-12-09 10:42:47.825673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26c9c60 with addr=10.0.0.2, port=4420 00:32:03.297 [2024-12-09 10:42:47.825689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c9c60 is same with the state(6) to be set 00:32:03.297 [2024-12-09 10:42:47.825846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.297 [2024-12-09 10:42:47.825872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ca8a0 with addr=10.0.0.2, port=4420 00:32:03.297 [2024-12-09 10:42:47.825888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ca8a0 is same with the state(6) to be set 00:32:03.297 [2024-12-09 10:42:47.826107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.297 [2024-12-09 10:42:47.826134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2214110 with addr=10.0.0.2, port=4420 00:32:03.297 [2024-12-09 10:42:47.826151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214110 is same with the state(6) to be set 00:32:03.297 [2024-12-09 10:42:47.826170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:32:03.297 [2024-12-09 10:42:47.826184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:32:03.297 [2024-12-09 10:42:47.826203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:03.297 [2024-12-09 10:42:47.826221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:32:03.297 [2024-12-09 10:42:47.826238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:32:03.297 [2024-12-09 10:42:47.826251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:32:03.297 [2024-12-09 10:42:47.826266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:32:03.297 [2024-12-09 10:42:47.826279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.826294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.826307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.826320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.826334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.826349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.826362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.826375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.826388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.827872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.298 [2024-12-09 10:42:47.827904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d4640 with addr=10.0.0.2, port=4420 00:32:03.298 [2024-12-09 10:42:47.827921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d4640 is same with the state(6) to be set 00:32:03.298 [2024-12-09 10:42:47.827949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ef460 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.827980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ef640 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.827999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c9c60 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.828018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ca8a0 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.828037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2214110 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.828139] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:32:03.298 [2024-12-09 10:42:47.828164] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:32:03.298 [2024-12-09 10:42:47.828185] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:32:03.298 [2024-12-09 10:42:47.828205] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:32:03.298 [2024-12-09 10:42:47.828224] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:32:03.298 [2024-12-09 10:42:47.828336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d4640 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.828363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.828379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.828393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.828407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.828421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.828434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.828448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.828462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.828475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.828488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.828501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.828513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.828530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.828543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.828557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.828570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.828583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.828596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.828617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.828631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:32:03.298 [2024-12-09 10:42:47.828716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:32:03.298 [2024-12-09 10:42:47.828748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:32:03.298 [2024-12-09 10:42:47.828767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:32:03.298 [2024-12-09 10:42:47.828784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:03.298 [2024-12-09 10:42:47.828829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.828847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.828861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.828875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.829097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.298 [2024-12-09 10:42:47.829125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x270c5a0 with addr=10.0.0.2, port=4420 00:32:03.298 [2024-12-09 10:42:47.829143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270c5a0 is same with the state(6) to be set 00:32:03.298 [2024-12-09 10:42:47.829331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.298 [2024-12-09 10:42:47.829356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a1cd0 with addr=10.0.0.2, port=4420 00:32:03.298 [2024-12-09 10:42:47.829373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a1cd0 is same with the state(6) to be set 00:32:03.298 [2024-12-09 10:42:47.829521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.298 [2024-12-09 10:42:47.829545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ac480 with addr=10.0.0.2, port=4420 00:32:03.298 [2024-12-09 10:42:47.829562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ac480 is same with the state(6) to be set 00:32:03.298 [2024-12-09 10:42:47.829709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:03.298 [2024-12-09 10:42:47.829747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ac910 with addr=10.0.0.2, port=4420 00:32:03.298 [2024-12-09 10:42:47.829763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ac910 is same with the state(6) to be set 00:32:03.298 [2024-12-09 10:42:47.829808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270c5a0 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.829832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a1cd0 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.829852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ac480 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.829870] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ac910 (9): Bad file descriptor 00:32:03.298 [2024-12-09 10:42:47.829912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.829931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.829944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.829963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.829979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.829993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.830006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.830019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.830033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.830046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.830060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.830073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:32:03.298 [2024-12-09 10:42:47.830088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:32:03.298 [2024-12-09 10:42:47.830102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:32:03.298 [2024-12-09 10:42:47.830115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:03.298 [2024-12-09 10:42:47.830127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:32:03.870 10:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2171066 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2171066 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2171066 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.809 rmmod nvme_tcp 00:32:04.809 
rmmod nvme_fabrics 00:32:04.809 rmmod nvme_keyring 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2170889 ']' 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2170889 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2170889 ']' 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2170889 00:32:04.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2170889) - No such process 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2170889 is not found' 00:32:04.809 Process with pid 2170889 is not found 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.809 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.810 10:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.342 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:07.342 00:32:07.342 real 0m7.886s 00:32:07.342 user 0m19.300s 00:32:07.342 sys 0m1.679s 00:32:07.342 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:07.342 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:07.342 ************************************ 00:32:07.342 END TEST nvmf_shutdown_tc3 00:32:07.342 ************************************ 00:32:07.342 10:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:32:07.342 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:07.343 ************************************ 00:32:07.343 START TEST nvmf_shutdown_tc4 00:32:07.343 ************************************ 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:07.343 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:07.343 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.343 10:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:07.343 Found net devices under 0000:84:00.0: cvl_0_0 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:07.343 Found net devices under 0000:84:00.1: cvl_0_1 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.343 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.344 10:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:32:07.344 00:32:07.344 --- 10.0.0.2 ping statistics --- 00:32:07.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.344 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:32:07.344 00:32:07.344 --- 10.0.0.1 ping statistics --- 00:32:07.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.344 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2171970 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2171970 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2171970 ']' 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.344 10:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:07.344 [2024-12-09 10:42:51.893291] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:32:07.344 [2024-12-09 10:42:51.893397] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.604 [2024-12-09 10:42:52.037624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.604 [2024-12-09 10:42:52.161836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.604 [2024-12-09 10:42:52.161941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.604 [2024-12-09 10:42:52.161978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.604 [2024-12-09 10:42:52.162008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.604 [2024-12-09 10:42:52.162033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.604 [2024-12-09 10:42:52.165538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.604 [2024-12-09 10:42:52.165632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.604 [2024-12-09 10:42:52.165686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:07.604 [2024-12-09 10:42:52.165689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:07.863 [2024-12-09 10:42:52.365842] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:07.863 10:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.863 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:07.863 Malloc1 
00:32:07.863 [2024-12-09 10:42:52.473048] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.863 Malloc2 00:32:08.122 Malloc3 00:32:08.122 Malloc4 00:32:08.122 Malloc5 00:32:08.122 Malloc6 00:32:08.122 Malloc7 00:32:08.380 Malloc8 00:32:08.380 Malloc9 00:32:08.380 Malloc10 00:32:08.380 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.380 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:08.380 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:08.380 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:08.380 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2172149 00:32:08.381 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:32:08.381 10:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:32:08.381 [2024-12-09 10:42:53.024810] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:32:13.706 10:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:13.706 10:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2171970 00:32:13.706 10:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2171970 ']' 00:32:13.706 10:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2171970 00:32:13.706 10:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:32:13.706 10:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.706 10:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2171970 00:32:13.706 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:13.706 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:13.706 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2171970' 00:32:13.706 killing process with pid 2171970 00:32:13.706 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2171970 00:32:13.706 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2171970 00:32:13.706 [2024-12-09 10:42:58.025918] 
00:32:13.706 [2024-12-09 10:42:58.025918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf00f0 is same with the state(6) to be set
[... 7 further identical records for tqpair=0xcf00f0 omitted ...]
00:32:13.706 [2024-12-09 10:42:58.026965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf05c0 is same with the state(6) to be set
[... 9 further identical records for tqpair=0xcf05c0 omitted ...]
00:32:13.706 [2024-12-09 10:42:58.028066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf0a90 is same with the state(6) to be set
[... 6 further identical records for tqpair=0xcf0a90 omitted ...]
00:32:13.706 Write completed with error (sct=0, sc=8)
00:32:13.706 starting I/O failed: -6
[... many identical 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' records omitted ...]
00:32:13.706 [2024-12-09 10:42:58.030509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... many identical write-error records omitted ...]
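From here the log is dominated by two record types from the initiator side. "Write completed with error (sct=0, sc=8)" reports one failed write: status code type 0 is the NVMe generic status set, and status code 8 (0x08) in that set is defined by the NVMe spec as Command Aborted due to SQ Deletion, exactly what queue teardown produces. "starting I/O failed: -6" and the "CQ transport error -6" records carry -ENXIO, matching the "(No such device or address)" text, since the killed target's socket is gone. With thousands of near-identical records, triage is easier after collapsing duplicates; a small sketch (the console.log filename is a placeholder):

# Count each distinct error record instead of scrolling past repeats.
grep -oE 'Write completed with error \(sct=[0-9]+, sc=[0-9]+\)|starting I/O failed: -?[0-9]+|CQ transport error -?[0-9]+ \([^)]*\) on qpair id [0-9]+|NVMe io qpair process completion error' \
  console.log | sort | uniq -c | sort -rn

The per-qpair "CQ transport error ... on qpair id N" records that punctuate the storm below are the useful signal: they show each of the four qpairs on cnode2, cnode1, cnode10 and cnode5 being torn down in turn.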
(sct=0, sc=8) 00:32:13.706 starting I/O failed: -6 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 starting I/O failed: -6 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 starting I/O failed: -6 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 starting I/O failed: -6 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 starting I/O failed: -6 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 starting I/O failed: -6 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.706 starting I/O failed: -6 00:32:13.706 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 [2024-12-09 10:42:58.031713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write 
completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 [2024-12-09 
10:42:58.032971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 starting I/O failed: -6 00:32:13.707 [2024-12-09 10:42:58.033909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda41f0 is same with tWrite completed with error (sct=0, sc=8) 00:32:13.707 he state(6) to be set 00:32:13.707 starting I/O failed: -6 00:32:13.707 [2024-12-09 10:42:58.033946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xda41f0 is same with tWrite completed with error (sct=0, sc=8) 00:32:13.707 he state(6) to be set 00:32:13.707 starting I/O failed: -6 00:32:13.707 [2024-12-09 10:42:58.033964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda41f0 is same with the state(6) to be set 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 [2024-12-09 10:42:58.033978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda41f0 is same with the state(6) to be set 00:32:13.707 starting I/O failed: -6 00:32:13.707 [2024-12-09 10:42:58.033990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda41f0 is same with the state(6) to be set 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.707 [2024-12-09 10:42:58.034002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda41f0 is same with the state(6) to be set 00:32:13.707 starting I/O failed: -6 00:32:13.707 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 [2024-12-09 10:42:58.034350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda46e0 is same with tstarting I/O failed: -6 00:32:13.708 he state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 [2024-12-09 10:42:58.034384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda46e0 is same with tstarting I/O failed: -6 00:32:13.708 he state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.034402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda46e0 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 [2024-12-09 10:42:58.034414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda46e0 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.034427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda46e0 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 [2024-12-09 10:42:58.034439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xda46e0 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 [2024-12-09 10:42:58.034806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4bb0 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.034840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4bb0 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.034863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4bb0 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.034865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.708 [2024-12-09 10:42:58.034877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4bb0 is same with the state(6) to be set 00:32:13.708 NVMe io qpair process completion error 00:32:13.708 [2024-12-09 10:42:58.034889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4bb0 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.034902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4bb0 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 [2024-12-09 10:42:58.035768] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 [2024-12-09 10:42:58.035793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.035808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 [2024-12-09 10:42:58.035821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 [2024-12-09 10:42:58.035833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.035845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 [2024-12-09 10:42:58.035857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 [2024-12-09 10:42:58.035868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb34870 is same with the state(6) to be set 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 [2024-12-09 10:42:58.036025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.708 starting I/O failed: -6 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error 
(sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 [2024-12-09 10:42:58.037076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.708 starting I/O failed: -6 00:32:13.708 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with 
error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 [2024-12-09 10:42:58.038388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 
00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 
00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 [2024-12-09 10:42:58.040733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.709 NVMe io qpair process completion error 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 Write completed with error (sct=0, sc=8) 00:32:13.709 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 [2024-12-09 10:42:58.042171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or 
address) on qpair id 2 00:32:13.710 starting I/O failed: -6 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 [2024-12-09 10:42:58.043330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write 
completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 [2024-12-09 10:42:58.044627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.710 Write completed with error (sct=0, sc=8) 00:32:13.710 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 00:32:13.711 Write completed with error (sct=0, sc=8) 00:32:13.711 starting I/O failed: -6 
00:32:13.711 Write completed with error (sct=0, sc=8)
00:32:13.711 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:32:13.711 [2024-12-09 10:42:58.046761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:13.711 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:32:13.711 [2024-12-09 10:42:58.048068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:32:13.712 [2024-12-09 10:42:58.049223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:32:13.712 [2024-12-09 10:42:58.050552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:32:13.713 [2024-12-09 10:42:58.053251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:13.713 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:32:13.713 [2024-12-09 10:42:58.054669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:32:13.713 [2024-12-09 10:42:58.055868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:32:13.714 [2024-12-09 10:42:58.057106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:32:13.714 [2024-12-09 10:42:58.060899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.714 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:32:13.714 [2024-12-09 10:42:58.062376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
00:32:13.715 [2024-12-09 10:42:58.063521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:32:13.715 [2024-12-09 10:42:58.064791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:32:13.716 [2024-12-09 10:42:58.067993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:13.716 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:32:13.716 [2024-12-09 10:42:58.069435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error entries omitted ...]
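For reference, the repeating status pair decodes, per the NVMe base specification, as sct=0 (Generic Command Status) with sc=8 (08h, "Command Aborted due to SQ Deletion"), which is what in-flight writes report while the failed qpairs are being torn down. A hedged sketch of decoding it with SPDK's public helper (print_status is a hypothetical name):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* spdk_nvme_cpl_get_status_string() maps (sct, sc) to SPDK's
     * human-readable name; for sct=0, sc=8 the generic-status table
     * yields the SQ-deletion abort string. */
    static void
    print_status(const struct spdk_nvme_cpl *cpl)
    {
        printf("sct=%d sc=%d (%s)\n",
               (int)cpl->status.sct, (int)cpl->status.sc,
               spdk_nvme_cpl_get_status_string(&cpl->status));
    }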
00:32:13.716 [2024-12-09 10:42:58.070624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:32:13.717 [2024-12-09 10:42:58.071914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:32:13.717 [2024-12-09 10:42:58.074235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:13.717 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 [2024-12-09 10:42:58.075487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 Write completed with error (sct=0, sc=8) 00:32:13.717 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 
00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 [2024-12-09 10:42:58.076624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, 
sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 [2024-12-09 10:42:58.077961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with 
error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.718 Write completed with error (sct=0, sc=8) 00:32:13.718 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error 
(sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 [2024-12-09 10:42:58.081562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.719 NVMe io qpair process completion error 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 [2024-12-09 10:42:58.082992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 
00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 [2024-12-09 10:42:58.084227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 
starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 starting I/O failed: -6 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.719 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 [2024-12-09 10:42:58.085479] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error 
(sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 [2024-12-09 10:42:58.088441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.720 NVMe io qpair process completion error 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write 
completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 Write completed with error (sct=0, sc=8) 00:32:13.720 starting I/O failed: -6 00:32:13.721 [2024-12-09 10:42:58.089904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting 
I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 [2024-12-09 10:42:58.091052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 
starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 [2024-12-09 10:42:58.092318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 
00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.721 starting I/O failed: -6 00:32:13.721 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 
00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 Write completed with error (sct=0, sc=8) 00:32:13.722 starting I/O failed: -6 00:32:13.722 [2024-12-09 10:42:58.095252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:13.722 NVMe io qpair process completion error 00:32:13.722 Initializing NVMe Controllers 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:32:13.722 Controller IO queue size 128, less than required. 00:32:13.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
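The "IO queue size 128, less than required" warning above means the perf tool requested a deeper queue than the target's controllers advertise, so excess requests queue inside the NVMe driver. A hedged illustration of tuning this with spdk_nvme_perf's standard flags (-q queue depth, -o IO size, -w workload, -t seconds, -r transport ID); the exact invocation used by shutdown.sh is not shown in this log, so this command line is hypothetical:

    # hypothetical stand-alone run against one of the subsystems above, with -q
    # lowered below the controller IO queue size of 128 to avoid driver-side queueing
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode7'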
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:32:13.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:32:13.722 Initialization complete. Launching workers.
00:32:13.722 ========================================================
00:32:13.722 Latency(us)
00:32:13.722 Device Information                                                      :     IOPS    MiB/s     Average        min        max
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1641.90    70.55    77969.28     956.24  147847.25
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1694.18    72.80    75597.28     930.05  151279.17
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1695.23    72.84    74536.64     970.18  137159.85
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1719.59    73.89    73494.25     790.16  136226.70
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1696.28    72.89    74527.15     887.46  128818.58
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1711.61    73.55    73880.47     939.61  133330.91
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1655.97    71.15    76388.65    1231.17  132446.37
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1671.08    71.80    75744.65     965.83  135914.04
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1634.13    70.22    77495.84     923.00  139868.81
00:32:13.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1665.21    71.55    76070.73    1195.27  142953.56
00:32:13.722 ========================================================
00:32:13.722 Total                                                                   : 16785.17   721.24    75548.08     790.16  151279.17
00:32:13.722
00:32:13.722 [2024-12-09 10:42:58.099072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff5f0 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff2c0 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200ae0 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200720 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200900 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed10 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ffc50 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ff920 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fe9e0 is same with the state(6) to be set
00:32:13.722 [2024-12-09 10:42:58.099646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fe6b0 is same with the state(6) to be set
00:32:13.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:13.980 10:42:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2172149
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2172149
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2172149
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
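The NOT wrapper traced above is the harness's expect-failure helper: wait 2172149 returns non-zero because spdk_nvme_perf exited with errors, and NOT inverts that failure into a pass, which is the whole point of this shutdown test. A simplified sketch of the pattern, assuming plain exit codes (the real helper in autotest_common.sh also validates the argument type and special-cases exit statuses above 128, per the (( es > 128 )) check in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        # succeed only when the wrapped command failed with an ordinary error code
        (( es != 0 && es <= 128 ))
    }

Used as "NOT wait <pid>", it passes exactly when the awaited process exited non-zero.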
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:15.368 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2171970 ']'
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2171970
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2171970 ']'
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2171970
00:32:15.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2171970) - No such process
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2171970 is not found'
00:32:15.368 Process with pid 2171970 is not found
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:15.368 10:42:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
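The killprocess trace above shows a probe-then-report flow: kill -0 checks whether the pid still exists without delivering a signal, and since the target process already exited, the helper simply reports that it is not found. A minimal sketch of that flow; anything beyond what the trace itself shows (the existence probe and the not-found message) is an assumption:

    killprocess() {
        local pid=$1
        # kill -0 probes for process existence without sending a signal
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" 2>/dev/null    # assumed cleanup step, not visible in this trace
        else
            echo "Process with pid $pid is not found"
        fi
    }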
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.267 00:32:17.267 real 0m10.167s 00:32:17.267 user 0m25.013s 00:32:17.267 sys 0m6.224s 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:17.267 ************************************ 00:32:17.267 END TEST nvmf_shutdown_tc4 00:32:17.267 ************************************ 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:32:17.267 00:32:17.267 real 0m40.992s 00:32:17.267 user 1m50.068s 00:32:17.267 sys 0m14.426s 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:17.267 ************************************ 00:32:17.267 END TEST nvmf_shutdown 00:32:17.267 ************************************ 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:17.267 ************************************ 00:32:17.267 START TEST nvmf_nsid 00:32:17.267 ************************************ 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:32:17.267 * Looking for test storage... 
00:32:17.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:32:17.267 10:43:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.525 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:17.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.526 --rc genhtml_branch_coverage=1 00:32:17.526 --rc genhtml_function_coverage=1 00:32:17.526 --rc genhtml_legend=1 00:32:17.526 --rc geninfo_all_blocks=1 00:32:17.526 --rc geninfo_unexecuted_blocks=1 00:32:17.526 00:32:17.526 ' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:17.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.526 --rc genhtml_branch_coverage=1 00:32:17.526 --rc genhtml_function_coverage=1 00:32:17.526 --rc genhtml_legend=1 00:32:17.526 --rc geninfo_all_blocks=1 00:32:17.526 --rc geninfo_unexecuted_blocks=1 00:32:17.526 00:32:17.526 ' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:17.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.526 --rc genhtml_branch_coverage=1 00:32:17.526 --rc genhtml_function_coverage=1 00:32:17.526 --rc genhtml_legend=1 00:32:17.526 --rc geninfo_all_blocks=1 00:32:17.526 --rc geninfo_unexecuted_blocks=1 00:32:17.526 00:32:17.526 ' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:17.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.526 --rc genhtml_branch_coverage=1 00:32:17.526 --rc genhtml_function_coverage=1 00:32:17.526 --rc genhtml_legend=1 00:32:17.526 --rc geninfo_all_blocks=1 00:32:17.526 --rc geninfo_unexecuted_blocks=1 00:32:17.526 00:32:17.526 ' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:17.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.526 10:43:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:20.818 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:20.818 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
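The two discovery traces above and the sysfs walk that follows boil down to a simple pattern: match PCI functions by vendor:device pair, then resolve each match to its kernel net device through /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of that pattern (device IDs and echo format taken from the log; this is illustrative, not the harness's actual gather_supported_nvmf_pci_devs):

#!/usr/bin/env bash
# Enumerate Intel E810 (0x8086:0x159b) PCI functions and their net devices.
intel=0x8086 e810=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
    echo "Found ${pci##*/} ($intel - $e810)"
    for net in "$pci"/net/*; do
        # Each entry under net/ is an interface bound to this PCI function.
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done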
00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:20.818 Found net devices under 0000:84:00.0: cvl_0_0 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:20.818 Found net devices under 0000:84:00.1: cvl_0_1 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.818 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.819 10:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:32:20.819 00:32:20.819 --- 10.0.0.2 ping statistics --- 00:32:20.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.819 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:32:20.819 00:32:20.819 --- 10.0.0.1 ping statistics --- 00:32:20.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.819 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2174934 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2174934 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2174934 ']' 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.819 10:43:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:21.078 [2024-12-09 10:43:05.516811] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
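The nvmf_tcp_init trace above builds the whole test topology before the target starts: the target-side port is moved into a private network namespace, both ends get a 10.0.0.0/24 address, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves reachability. Condensed into the commands the log actually ran (root required; interface and namespace names as in the trace):

ip netns add cvl_0_0_ns_spdk                       # target lives in its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port inside
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

With the plumbing verified, nvmf_tgt itself is launched under ip netns exec so every listener it opens binds inside the namespace.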
00:32:21.078 [2024-12-09 10:43:05.516980] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.078 [2024-12-09 10:43:05.702873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.336 [2024-12-09 10:43:05.820565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.336 [2024-12-09 10:43:05.820684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.336 [2024-12-09 10:43:05.820736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.336 [2024-12-09 10:43:05.820771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:21.336 [2024-12-09 10:43:05.820796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.336 [2024-12-09 10:43:05.821806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2175054 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=10be27ff-6c60-45d7-9400-791e90ed49c3 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b245f8ac-0761-4224-8520-0da9198e17db 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6653994c-2823-41ca-8b18-ba0e6a71a394 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:21.596 null0 00:32:21.596 null1 00:32:21.596 null2 00:32:21.596 [2024-12-09 10:43:06.186011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:21.596 [2024-12-09 10:43:06.210374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.596 [2024-12-09 10:43:06.229292] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:32:21.596 [2024-12-09 10:43:06.229391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175054 ] 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2175054 /var/tmp/tgt2.sock 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2175054 ']' 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:32:21.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
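At this point two SPDK instances are up: nvmf_tgt inside the namespace on the default /var/tmp/spdk.sock, and a second spdk_tgt started with -m 2 -r /var/tmp/tgt2.sock so the two can be driven independently. The rpc_cmd body that provisions the second target is collapsed in the xtrace output above; the sequence below is a representative reconstruction using standard SPDK RPCs (subsystem name, port, and the null bdevs come from the log, but the exact calls the test issues are not visible in the trace):

rpc="scripts/rpc.py -s /var/tmp/tgt2.sock"   # private RPC socket of the 2nd target
$rpc nvmf_create_transport -t tcp
$rpc bdev_null_create null2 100 4096         # 100 MB null bdev, 4 KiB blocks
$rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
$rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2
$rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421

The host-side connect that follows in the trace then targets exactly the listener set up here:

nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid=cd6acfbe-4794-e311-a299-001e67a97b02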
00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.596 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:21.855 [2024-12-09 10:43:06.367995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.855 [2024-12-09 10:43:06.489378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.423 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.423 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:32:22.423 10:43:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:32:22.990 [2024-12-09 10:43:07.448897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.990 [2024-12-09 10:43:07.466350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:32:22.990 nvme0n1 nvme0n2 00:32:22.990 nvme1n1 00:32:22.990 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:32:22.990 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:32:22.990 10:43:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:32:23.558 10:43:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:32:24.502 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 10be27ff-6c60-45d7-9400-791e90ed49c3 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:32:24.502 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:32:24.760 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=10be27ff6c6045d79400791e90ed49c3 00:32:24.760 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 10BE27FF6C6045D79400791E90ED49C3 00:32:24.760 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 10BE27FF6C6045D79400791E90ED49C3 == \1\0\B\E\2\7\F\F\6\C\6\0\4\5\D\7\9\4\0\0\7\9\1\E\9\0\E\D\4\9\C\3 ]] 00:32:24.760 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:32:24.760 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:32:24.760 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b245f8ac-0761-4224-8520-0da9198e17db 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b245f8ac0761422485200da9198e17db 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B245F8AC0761422485200DA9198E17DB 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B245F8AC0761422485200DA9198E17DB == \B\2\4\5\F\8\A\C\0\7\6\1\4\2\2\4\8\5\2\0\0\D\A\9\1\9\8\E\1\7\D\B ]] 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:32:24.761 10:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6653994c-2823-41ca-8b18-ba0e6a71a394 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6653994c282341ca8b18ba0e6a71a394 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6653994C282341CA8B18BA0E6A71A394 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6653994C282341CA8B18BA0E6A71A394 == \6\6\5\3\9\9\4\C\2\8\2\3\4\1\C\A\8\B\1\8\B\A\0\E\6\A\7\1\A\3\9\4 ]] 00:32:24.761 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2175054 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2175054 ']' 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2175054 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2175054 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2175054' 00:32:25.019 killing process with pid 2175054 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2175054 00:32:25.019 10:43:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2175054 00:32:25.953 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.954 rmmod nvme_tcp 00:32:25.954 rmmod nvme_fabrics 00:32:25.954 rmmod nvme_keyring 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2174934 ']' 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2174934 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2174934 ']' 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2174934 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2174934 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2174934' 00:32:25.954 killing process with pid 2174934 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2174934 00:32:25.954 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2174934 00:32:26.212 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:26.212 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:26.212 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:26.212 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:32:26.212 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:32:26.470 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:26.470 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:32:26.470 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:26.470 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:26.470 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.470 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.470 10:43:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.410 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.410 00:32:28.410 real 0m11.093s 00:32:28.410 user 
0m11.192s 00:32:28.410 sys 0m4.248s 00:32:28.410 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.410 10:43:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:32:28.410 ************************************ 00:32:28.410 END TEST nvmf_nsid 00:32:28.410 ************************************ 00:32:28.410 10:43:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:28.410 00:32:28.410 real 16m1.679s 00:32:28.410 user 36m59.270s 00:32:28.410 sys 3m34.897s 00:32:28.410 10:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.410 10:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:28.410 ************************************ 00:32:28.410 END TEST nvmf_target_extra 00:32:28.410 ************************************ 00:32:28.410 10:43:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:28.410 10:43:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:28.410 10:43:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.410 10:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:28.410 ************************************ 00:32:28.410 START TEST nvmf_host 00:32:28.410 ************************************ 00:32:28.410 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:32:28.668 * Looking for test storage... 00:32:28.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:28.668 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:28.668 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:28.668 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.927 --rc genhtml_branch_coverage=1 00:32:28.927 --rc genhtml_function_coverage=1 00:32:28.927 --rc genhtml_legend=1 00:32:28.927 --rc geninfo_all_blocks=1 00:32:28.927 --rc geninfo_unexecuted_blocks=1 00:32:28.927 00:32:28.927 ' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.927 --rc genhtml_branch_coverage=1 00:32:28.927 --rc genhtml_function_coverage=1 00:32:28.927 --rc genhtml_legend=1 00:32:28.927 --rc geninfo_all_blocks=1 00:32:28.927 --rc geninfo_unexecuted_blocks=1 00:32:28.927 00:32:28.927 ' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.927 --rc genhtml_branch_coverage=1 00:32:28.927 --rc genhtml_function_coverage=1 00:32:28.927 --rc genhtml_legend=1 00:32:28.927 --rc geninfo_all_blocks=1 00:32:28.927 --rc geninfo_unexecuted_blocks=1 00:32:28.927 00:32:28.927 ' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:28.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.927 --rc genhtml_branch_coverage=1 00:32:28.927 --rc genhtml_function_coverage=1 00:32:28.927 --rc genhtml_legend=1 00:32:28.927 --rc geninfo_all_blocks=1 00:32:28.927 --rc geninfo_unexecuted_blocks=1 00:32:28.927 00:32:28.927 ' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
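Worth distilling from the nsid test that just finished above: a namespace's NGUID is expected to equal its creation UUID with the dashes stripped, and the test verifies this through nvme-cli. A standalone sketch of that check (device path and UUID taken from the trace; requires nvme-cli and jq):

ns_uuid=10be27ff-6c60-45d7-9400-791e90ed49c3          # UUID passed at namespace creation
want=$(tr -d - <<<"$ns_uuid")                         # uuid2nguid: drop the dashes
got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid) # NGUID reported by the controller
[[ ${got,,} == "${want,,}" ]] && echo "nsid 1: NGUID matches its UUID"

The same comparison runs for nvme0n2 and nvme0n3 against their own UUIDs before the controller is disconnected.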
00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:28.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:32:28.927 10:43:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:28.928 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:28.928 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.928 10:43:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.928 ************************************ 00:32:28.928 START TEST nvmf_multicontroller 00:32:28.928 ************************************ 00:32:28.928 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:32:28.928 * Looking for test storage... 
00:32:28.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:28.928 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:28.928 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:32:28.928 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:29.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.189 --rc genhtml_branch_coverage=1 00:32:29.189 --rc genhtml_function_coverage=1 00:32:29.189 --rc genhtml_legend=1 00:32:29.189 --rc geninfo_all_blocks=1 00:32:29.189 --rc geninfo_unexecuted_blocks=1 00:32:29.189 00:32:29.189 ' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:29.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.189 --rc genhtml_branch_coverage=1 00:32:29.189 --rc genhtml_function_coverage=1 00:32:29.189 --rc genhtml_legend=1 00:32:29.189 --rc geninfo_all_blocks=1 00:32:29.189 --rc geninfo_unexecuted_blocks=1 00:32:29.189 00:32:29.189 ' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:29.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.189 --rc genhtml_branch_coverage=1 00:32:29.189 --rc genhtml_function_coverage=1 00:32:29.189 --rc genhtml_legend=1 00:32:29.189 --rc geninfo_all_blocks=1 00:32:29.189 --rc geninfo_unexecuted_blocks=1 00:32:29.189 00:32:29.189 ' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:29.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.189 --rc genhtml_branch_coverage=1 00:32:29.189 --rc genhtml_function_coverage=1 00:32:29.189 --rc genhtml_legend=1 00:32:29.189 --rc geninfo_all_blocks=1 00:32:29.189 --rc geninfo_unexecuted_blocks=1 00:32:29.189 00:32:29.189 ' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:32:29.189 10:43:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:32:29.189 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:29.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:29.190 10:43:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:32:29.190 10:43:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:32:32.509 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.510 
10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:32.510 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:32.510 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.510 10:43:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:32.510 Found net devices under 0000:84:00.0: cvl_0_0 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:32.510 Found net devices under 0000:84:00.1: cvl_0_1 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
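The device scan traced above is plain sysfs walking: for each PCI function collected into the e810 array, the script expands /sys/bus/pci/devices/$pci/net/* to find the kernel netdev bound to that function, then strips the path prefix to get names like cvl_0_0. A condensed sketch of the same lookup (assumption: simplified from the nvmf/common.sh trace, without the device-ID and link-state checks shown above):

    for pci in 0000:84:00.0 0000:84:00.1; do        # the two E810 ports found above
        for net_path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net_path ]] || continue           # no netdev bound to this function
            echo "Found net devices under $pci: ${net_path##*/}"
        done
    done

With two interfaces found, is_hw=yes and the trace proceeds into nvmf_tcp_init below.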
00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:32:32.510 00:32:32.510 --- 10.0.0.2 ping statistics --- 00:32:32.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.510 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:32:32.510 00:32:32.510 --- 10.0.0.1 ping statistics --- 00:32:32.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.510 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.510 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.511 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.511 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.511 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:32:32.511 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.511 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.511 10:43:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2177775 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2177775 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2177775 ']' 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.511 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:32.511 [2024-12-09 10:43:17.071180] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
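nvmf_tcp_init, traced above, splits target and initiator across a network namespace so that NVMe/TCP traffic between 10.0.0.1 (initiator, cvl_0_1, root namespace) and 10.0.0.2 (target, cvl_0_0, namespace cvl_0_0_ns_spdk) actually crosses the physical link. Condensed from the trace, the plumbing is roughly:

    ip netns add cvl_0_0_ns_spdk                    # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # both directions verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

After both pings succeed, NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD wrapper, which is why nvmf_tgt above is launched through ip netns exec cvl_0_0_ns_spdk.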
00:32:32.511 [2024-12-09 10:43:17.071287] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.770 [2024-12-09 10:43:17.220261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:32.770 [2024-12-09 10:43:17.337609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.770 [2024-12-09 10:43:17.337707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.770 [2024-12-09 10:43:17.337761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.770 [2024-12-09 10:43:17.337791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.770 [2024-12-09 10:43:17.337816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.770 [2024-12-09 10:43:17.340973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.770 [2024-12-09 10:43:17.341085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.770 [2024-12-09 10:43:17.341080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.029 [2024-12-09 10:43:17.536957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.029 Malloc0 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.029 [2024-12-09 10:43:17.597922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.029 [2024-12-09 10:43:17.605798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.029 Malloc1 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:32:33.029 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2177923 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2177923 /var/tmp/bdevperf.sock 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2177923 ']' 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
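The rpc_cmd calls above build the target side for the multicontroller test: a TCP transport, one malloc bdev per subsystem, and two subsystems (cnode1, cnode2) each listening on ports 4420 and 4421 of 10.0.0.2, while bdevperf is started with -z (wait for RPC) on its own socket. Written as direct rpc.py calls, the cnode1 half looks roughly like this (assumption: rpc_cmd wraps scripts/rpc.py against the target's default RPC socket):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is configured the same way with Malloc1, as traced above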
00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.030 10:43:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.598 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.598 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:32:33.598 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:32:33.598 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.598 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.864 NVMe0n1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.864 1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.864 request: 00:32:33.864 { 00:32:33.864 "name": "NVMe0", 00:32:33.864 "trtype": "tcp", 00:32:33.864 "traddr": "10.0.0.2", 00:32:33.864 "adrfam": "ipv4", 00:32:33.864 "trsvcid": "4420", 00:32:33.864 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:32:33.864 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:32:33.864 "hostaddr": "10.0.0.1", 00:32:33.864 "prchk_reftag": false, 00:32:33.864 "prchk_guard": false, 00:32:33.864 "hdgst": false, 00:32:33.864 "ddgst": false, 00:32:33.864 "allow_unrecognized_csi": false, 00:32:33.864 "method": "bdev_nvme_attach_controller", 00:32:33.864 "req_id": 1 00:32:33.864 } 00:32:33.864 Got JSON-RPC error response 00:32:33.864 response: 00:32:33.864 { 00:32:33.864 "code": -114, 00:32:33.864 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:33.864 } 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.864 request: 00:32:33.864 { 00:32:33.864 "name": "NVMe0", 00:32:33.864 "trtype": "tcp", 00:32:33.864 "traddr": "10.0.0.2", 00:32:33.864 "adrfam": "ipv4", 00:32:33.864 "trsvcid": "4420", 00:32:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:33.864 "hostaddr": "10.0.0.1", 00:32:33.864 "prchk_reftag": false, 00:32:33.864 "prchk_guard": false, 00:32:33.864 "hdgst": false, 00:32:33.864 "ddgst": false, 00:32:33.864 "allow_unrecognized_csi": false, 00:32:33.864 "method": "bdev_nvme_attach_controller", 00:32:33.864 "req_id": 1 00:32:33.864 } 00:32:33.864 Got JSON-RPC error response 00:32:33.864 response: 00:32:33.864 { 00:32:33.864 "code": -114, 00:32:33.864 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:33.864 } 00:32:33.864 10:43:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.864 request: 00:32:33.864 { 00:32:33.864 "name": "NVMe0", 00:32:33.864 "trtype": "tcp", 00:32:33.864 "traddr": "10.0.0.2", 00:32:33.864 "adrfam": "ipv4", 00:32:33.864 "trsvcid": "4420", 00:32:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:33.864 "hostaddr": "10.0.0.1", 00:32:33.864 "prchk_reftag": false, 00:32:33.864 "prchk_guard": false, 00:32:33.864 "hdgst": false, 00:32:33.864 "ddgst": false, 00:32:33.864 "multipath": "disable", 00:32:33.864 "allow_unrecognized_csi": false, 00:32:33.864 "method": "bdev_nvme_attach_controller", 00:32:33.864 "req_id": 1 00:32:33.864 } 00:32:33.864 Got JSON-RPC error response 00:32:33.864 response: 00:32:33.864 { 00:32:33.864 "code": -114, 00:32:33.864 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:32:33.864 } 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:33.864 10:43:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:33.864 request: 00:32:33.864 { 00:32:33.864 "name": "NVMe0", 00:32:33.864 "trtype": "tcp", 00:32:33.864 "traddr": "10.0.0.2", 00:32:33.864 "adrfam": "ipv4", 00:32:33.864 "trsvcid": "4420", 00:32:33.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:33.864 "hostaddr": "10.0.0.1", 00:32:33.864 "prchk_reftag": false, 00:32:33.864 "prchk_guard": false, 00:32:33.864 "hdgst": false, 00:32:33.864 "ddgst": false, 00:32:33.864 "multipath": "failover", 00:32:33.864 "allow_unrecognized_csi": false, 00:32:33.864 "method": "bdev_nvme_attach_controller", 00:32:33.864 "req_id": 1 00:32:33.864 } 00:32:33.864 Got JSON-RPC error response 00:32:33.864 response: 00:32:33.864 { 00:32:33.864 "code": -114, 00:32:33.864 "message": "A controller named NVMe0 already exists with the specified network path" 00:32:33.864 } 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.864 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:34.143 NVMe0n1 00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
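Each of the NOT-wrapped attaches above fails with JSON-RPC error -114 because bdevperf already holds a controller named NVMe0 on that address/port pair; the message text varies (the multipath "disable" case reports that multipath is disabled) but all four hit the same duplicate-name check, and only the attach to the second listener on port 4421 succeeds and reports NVMe0n1 again. A hedged sketch of probing this by hand against a freshly started bdevperf (assumption: same rpc.py flags as the trace, socket /var/tmp/bdevperf.sock):

    rpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1              # first path: NVMe0n1
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
        || echo 'rejected with -114, as in the trace'      # name collision
    $rpc bdev_nvme_get_controllers                          # still one controller: NVMe0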
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:32:34.143
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:32:34.143 10:43:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:35.519 {
00:32:35.519 "results": [
00:32:35.519 {
00:32:35.519 "job": "NVMe0n1",
00:32:35.519 "core_mask": "0x1",
00:32:35.519 "workload": "write",
00:32:35.519 "status": "finished",
00:32:35.519 "queue_depth": 128,
00:32:35.519 "io_size": 4096,
00:32:35.519 "runtime": 1.004297,
00:32:35.519 "iops": 18664.797365719503,
00:32:35.519 "mibps": 72.90936470984181,
00:32:35.519 "io_failed": 0,
00:32:35.519 "io_timeout": 0,
00:32:35.519 "avg_latency_us": 6847.157875502604,
00:32:35.519 "min_latency_us": 4126.34074074074,
00:32:35.519 "max_latency_us": 12281.931851851852
00:32:35.519 }
00:32:35.519 ],
00:32:35.519 "core_count": 1
00:32:35.519 }
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2177923
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2177923 ']'
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2177923
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2177923
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2177923'
00:32:35.519 killing process with pid 2177923
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2177923
00:32:35.519 10:43:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2177923
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:32:35.519 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:32:35.519 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:32:35.519 [2024-12-09 10:43:17.721777] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:32:35.519 [2024-12-09 10:43:17.721888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177923 ]
00:32:35.519 [2024-12-09 10:43:17.800127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:35.519 [2024-12-09 10:43:17.860374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:35.519 [2024-12-09 10:43:18.648306] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 792990bc-83ad-4247-b68f-c0d13b17aebf already exists
00:32:35.519 [2024-12-09 10:43:18.648345] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:792990bc-83ad-4247-b68f-c0d13b17aebf alias for bdev NVMe1n1
00:32:35.519 [2024-12-09 10:43:18.648366] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:32:35.519 Running I/O for 1 seconds...
00:32:35.520 18617.00 IOPS, 72.72 MiB/s
00:32:35.520 Latency(us)
00:32:35.520 [2024-12-09T09:43:20.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:35.520 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:32:35.520 NVMe0n1 : 1.00 18664.80 72.91 0.00 0.00 6847.16 4126.34 12281.93
00:32:35.520 [2024-12-09T09:43:20.174Z] ===================================================================================================================
00:32:35.520 [2024-12-09T09:43:20.174Z] Total : 18664.80 72.91 0.00 0.00 6847.16 4126.34 12281.93
00:32:35.520 Received shutdown signal, test time was about 1.000000 seconds
00:32:35.520
00:32:35.520 Latency(us)
00:32:35.520 [2024-12-09T09:43:20.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:35.520 [2024-12-09T09:43:20.174Z] ===================================================================================================================
00:32:35.520 [2024-12-09T09:43:20.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:35.520 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:32:35.520 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:35.520 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:32:35.520 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:32:35.520 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:35.520 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:34:04.479 Resuming build at Mon Dec 09 09:44:49 UTC 2024 after Jenkins restart
00:34:08.867 Waiting for reconnection of GP8 before proceeding with build
00:34:09.170 Timeout set to expire in 30 min
00:34:09.191 Ready to run at Mon Dec 09 09:44:53 UTC 2024
00:34:09.533 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:09.533 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:34:09.534 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:09.534 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:09.534 rmmod nvme_tcp
00:34:09.534 rmmod nvme_fabrics
00:34:09.534 rmmod nvme_keyring
00:34:09.534 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:09.535 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:34:09.535 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:34:09.535 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2177775 ']'
00:34:09.535 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2177775
00:34:09.536 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2177775 ']'
00:34:09.536 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2177775
00:34:09.536 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:34:09.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:09.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2177775
00:34:09.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:09.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:09.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2177775'
00:34:09.539 killing process with pid 2177775
00:34:09.539 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2177775
00:34:09.539 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2177775
00:34:09.540 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:09.540 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:09.540 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:09.540 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr
00:34:09.541 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save
00:34:09.541 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:09.541 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore
00:34:09.542 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:09.542 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:09.544 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:09.544 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:09.544 10:43:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:09.544 10:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:09.545
00:34:09.545 real 0m9.388s
00:34:09.545 user 0m13.877s
00:34:09.545 sys 0m3.507s
00:34:09.545 10:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:09.545 10:43:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:34:09.545 ************************************
00:34:09.546 END TEST nvmf_multicontroller
00:34:09.546 ************************************
00:34:09.546 10:43:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:34:09.546 10:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:09.547 10:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:09.547 10:43:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.547 ************************************
00:34:09.547 START TEST nvmf_aer
00:34:09.548 ************************************
00:34:09.548 10:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:34:09.548 * Looking for test storage...
00:34:09.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:09.548 10:43:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:09.548 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version
00:34:09.549 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:09.549 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:09.549 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:09.550 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:09.550 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:09.550 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-:
00:34:09.550 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1
00:34:09.551 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-:
00:34:09.551 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2
00:34:09.551 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<'
00:34:09.552 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2
00:34:09.552 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1
00:34:09.552 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:09.552 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in
00:34:09.553 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1
00:34:09.553 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:09.553 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:09.554 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1
00:34:09.554 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1
00:34:09.554 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:09.554 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1
00:34:09.555 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1
00:34:09.555 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2
00:34:09.555 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2
00:34:09.555 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:09.556 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2
00:34:09.556 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2
00:34:09.556 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:09.557 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:09.557 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0
00:34:09.557 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:09.558 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:09.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:09.558 --rc genhtml_branch_coverage=1
00:34:09.558 --rc genhtml_function_coverage=1
00:34:09.558 --rc genhtml_legend=1
00:34:09.558 --rc geninfo_all_blocks=1
00:34:09.558 --rc geninfo_unexecuted_blocks=1
00:34:09.559 '
00:34:09.559 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:09.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:09.559 --rc genhtml_branch_coverage=1
00:34:09.559 --rc genhtml_function_coverage=1
00:34:09.559 --rc genhtml_legend=1
00:34:09.560 --rc geninfo_all_blocks=1
00:34:09.560 --rc geninfo_unexecuted_blocks=1
00:34:09.560 '
00:34:09.560 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:34:09.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:09.560 --rc genhtml_branch_coverage=1
00:34:09.561 --rc genhtml_function_coverage=1
00:34:09.561 --rc genhtml_legend=1
00:34:09.561 --rc geninfo_all_blocks=1
00:34:09.561 --rc geninfo_unexecuted_blocks=1
00:34:09.561
00:34:09.561 '
00:34:09.561 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:34:09.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:09.562 --rc genhtml_branch_coverage=1
00:34:09.562 --rc genhtml_function_coverage=1
00:34:09.562 --rc genhtml_legend=1
00:34:09.562 --rc geninfo_all_blocks=1
00:34:09.562 --rc geninfo_unexecuted_blocks=1
00:34:09.563
00:34:09.563 '
00:34:09.563 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:09.563 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:34:09.563 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:09.564 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:09.564 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:09.564 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:09.565 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:09.565 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:09.565 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:09.566 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:09.566 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:09.567 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:09.567 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:34:09.567 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:34:09.568 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:09.568 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:09.569 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:09.569 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:09.569 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:09.570 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob
00:34:09.570 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:09.571 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:09.571 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:09.573 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:09.576 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:09.578 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:09.578 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:34:09.582 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:09.582 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0
00:34:09.583 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:09.583 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:09.583 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:09.584 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:09.584 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:09.584 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:34:09.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:34:09.585 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:09.585 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:09.586 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:09.586 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:34:09.586 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:09.587 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:09.587 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:09.587 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:09.588 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:09.588 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:09.588 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:09.589 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:09.589 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:09.589 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:09.590 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable
00:34:09.590 10:43:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.590 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:09.591 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=()
00:34:09.591 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:09.591 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:09.592 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:09.592 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:09.592 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:09.593 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=()
00:34:09.593 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:09.593 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=()
00:34:09.593 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810
00:34:09.594 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=()
00:34:09.594 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722
00:34:09.594 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=()
00:34:09.594 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx
00:34:09.595 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:09.595 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:09.595 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:09.596 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:09.596 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:09.597 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:09.597 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:09.597 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:09.598 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:09.598 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:09.598 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:09.599 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:09.599 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:09.599 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:09.599 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:09.600 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:09.600 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:09.600 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:09.601 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:09.601 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:34:09.601 Found 0000:84:00.0 (0x8086 - 0x159b)
00:34:09.601 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:09.602 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:09.602 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:09.602 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:09.602 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:09.603 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:09.603 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:34:09.603 Found 0000:84:00.1 (0x8086 - 0x159b)
00:34:09.603 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:09.604 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:09.604 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:09.604 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:09.605 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:09.605 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:09.605 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:09.605 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:09.606 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:09.606 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:09.607 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:09.608 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:09.609 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:09.609 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:09.610 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:09.610 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:34:09.610 Found net devices under 0000:84:00.0: cvl_0_0
00:34:09.611 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:09.611 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:09.611 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:09.611 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:09.612 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:09.612 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:09.612 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:09.613 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:09.613 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:34:09.613 Found net devices under 0000:84:00.1: cvl_0_1
00:34:09.614 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:09.614 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:09.614 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes
00:34:09.614 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:09.615 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:34:09.615 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:34:09.615 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:09.615 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:09.616 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:09.616 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:09.616 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:09.617 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:09.617 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:09.617 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:09.618 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:09.618 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:09.619 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:09.619 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:09.619 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:09.620 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:09.620 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:09.620 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:09.621 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:09.621 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:09.622 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:09.622 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:09.623 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:09.623 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:09.624 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:09.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:09.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms
00:34:09.624
00:34:09.624 --- 10.0.0.2 ping statistics ---
00:34:09.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:09.625 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms
00:34:09.625 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:09.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:09.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:34:09.626
00:34:09.626 --- 10.0.0.1 ping statistics ---
00:34:09.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:09.626 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:34:09.627 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:09.627 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:34:09.627 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:09.628 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:09.628 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:09.628 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:09.629 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:09.629 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:09.629 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:09.630 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:34:09.630 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:09.630 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:09.631 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.631 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2180289
00:34:09.632 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:34:09.632 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2180289
00:34:09.632 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2180289 ']'
00:34:09.633 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:09.633 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:09.634 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:09.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:09.634 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:09.635 10:43:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.635 [2024-12-09 10:43:26.697540] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:34:09.636 [2024-12-09 10:43:26.697638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:09.637 [2024-12-09 10:43:26.840225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:09.637 [2024-12-09 10:43:26.963641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:09.638 [2024-12-09 10:43:26.963773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:09.638 [2024-12-09 10:43:26.963817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:09.638 [2024-12-09 10:43:26.963850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:09.639 [2024-12-09 10:43:26.963879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:09.639 [2024-12-09 10:43:26.967422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:09.640 [2024-12-09 10:43:26.967521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:09.640 [2024-12-09 10:43:26.967618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:34:09.640 [2024-12-09 10:43:26.967622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:09.641 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:09.641 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0
00:34:09.641 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:09.642 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:09.642 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.643 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:09.643 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:09.643 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.644 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.644 [2024-12-09 10:43:27.132936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:09.644 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.645 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:34:09.645 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.645 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.646 Malloc0
00:34:09.646 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.646 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:34:09.647 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.647 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.647 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.648 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:09.648 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.649 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.649 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.649 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:09.650 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.650 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.651 [2024-12-09 10:43:27.203316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:09.651 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.651 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:34:09.652 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.652 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.652 [
00:34:09.652 {
00:34:09.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:34:09.652 "subtype": "Discovery",
00:34:09.652 "listen_addresses": [],
00:34:09.652 "allow_any_host": true,
00:34:09.652 "hosts": []
00:34:09.652 },
00:34:09.653 {
00:34:09.653 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:34:09.653 "subtype": "NVMe",
00:34:09.653 "listen_addresses": [
00:34:09.653 {
00:34:09.653 "trtype": "TCP",
00:34:09.653 "adrfam": "IPv4",
00:34:09.653 "traddr": "10.0.0.2",
00:34:09.653 "trsvcid": "4420"
00:34:09.653 }
00:34:09.653 ],
00:34:09.653 "allow_any_host": true,
00:34:09.653 "hosts": [],
00:34:09.653 "serial_number": "SPDK00000000000001",
00:34:09.653 "model_number": "SPDK bdev Controller",
00:34:09.654 "max_namespaces": 2,
00:34:09.654 "min_cntlid": 1,
00:34:09.654 "max_cntlid": 65519,
00:34:09.654 "namespaces": [
00:34:09.654 {
00:34:09.654 "nsid": 1,
00:34:09.654 "bdev_name": "Malloc0",
00:34:09.654 "name": "Malloc0",
00:34:09.654 "nguid": "B2F71CCF8F87420D8497F9B8AF84DFBA",
00:34:09.654 "uuid": "b2f71ccf-8f87-420d-8497-f9b8af84dfba"
00:34:09.654 }
00:34:09.654 ]
00:34:09.654 }
00:34:09.654 ]
00:34:09.655 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.655 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:34:09.656 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:34:09.656 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2180432
00:34:09.656 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:34:09.657 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:34:09.658 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:34:09.658 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:34:09.658 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:34:09.659 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:34:09.659 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:34:09.659 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:34:09.660 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:34:09.660 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:34:09.660 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:34:09.661 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:34:09.661 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:34:09.662 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
00:34:09.662 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:34:09.662 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.663 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.663 Malloc1
00:34:09.663 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.664 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:34:09.664 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.664 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.665 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.665 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:34:09.665 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.666 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.666 [
00:34:09.666 {
00:34:09.666 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:34:09.666 "subtype": "Discovery",
00:34:09.666 "listen_addresses": [],
00:34:09.666 "allow_any_host": true,
00:34:09.666 "hosts": []
00:34:09.666 },
00:34:09.667 {
00:34:09.667 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:34:09.667 "subtype": "NVMe",
00:34:09.667 "listen_addresses": [
00:34:09.667 {
00:34:09.667 "trtype": "TCP",
00:34:09.667 "adrfam": "IPv4",
00:34:09.667 "traddr": "10.0.0.2",
00:34:09.667 "trsvcid": "4420"
00:34:09.667 }
00:34:09.667 ],
00:34:09.667 "allow_any_host": true,
00:34:09.668 "hosts": [],
00:34:09.668 "serial_number": "SPDK00000000000001",
00:34:09.668 "model_number": "SPDK bdev Controller",
00:34:09.668 "max_namespaces": 2,
00:34:09.668 "min_cntlid": 1,
00:34:09.668 "max_cntlid": 65519,
00:34:09.668 "namespaces": [
00:34:09.668 {
00:34:09.668 "nsid": 1,
00:34:09.668 "bdev_name": "Malloc0",
00:34:09.668 "name": "Malloc0",
00:34:09.668 "nguid": "B2F71CCF8F87420D8497F9B8AF84DFBA",
00:34:09.669 "uuid": "b2f71ccf-8f87-420d-8497-f9b8af84dfba"
00:34:09.669 },
00:34:09.669 {
00:34:09.669 "nsid": 2,
00:34:09.669 "bdev_name": "Malloc1",
00:34:09.669 "name": "Malloc1",
00:34:09.669 "nguid": "1CE08C1B519245C49552A03D9D974F7C",
00:34:09.669 "uuid": "1ce08c1b-5192-45c4-9552-a03d9d974f7c"
00:34:09.669 }
00:34:09.669 ]
00:34:09.669 }
00:34:09.669 ]
00:34:09.670 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.670 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2180432
00:34:09.670 Asynchronous Event Request test
00:34:09.670 Attaching to 10.0.0.2
00:34:09.670 Attached to 10.0.0.2
00:34:09.671 Registering asynchronous event callbacks...
00:34:09.671 Starting namespace attribute notice tests for all controllers...
00:34:09.671 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:34:09.671 aer_cb - Changed Namespace
00:34:09.671 Cleaning up...
00:34:09.672 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:34:09.672 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.672 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.673 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.673 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:34:09.673 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.674 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.674 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.674 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:09.675 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:09.675 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.675 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.676 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:34:09.676 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:34:09.676 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:09.677 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:34:09.677 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:09.677 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:34:09.677 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:09.678 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:09.678 rmmod nvme_tcp
00:34:09.678 rmmod nvme_fabrics
00:34:09.678 rmmod nvme_keyring
00:34:09.678 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:09.679 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:34:09.679 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
00:34:09.679 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2180289 ']'
00:34:09.680 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2180289
00:34:09.680 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2180289 ']'
00:34:09.680 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2180289
00:34:09.681 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname
00:34:09.681 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:09.681 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180289
00:34:09.682 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:09.682 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:09.683 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180289'
00:34:09.683 killing process with pid 2180289
00:34:09.683 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2180289
00:34:09.683 10:43:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2180289
00:34:09.684 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:09.684 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:09.684 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:09.685 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr
00:34:09.685 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save
00:34:09.685 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:09.686 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore
00:34:09.686 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:09.686 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:09.687 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:09.687 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:09.688 10:43:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:09.688 10:43:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:09.688
00:34:09.688 real 0m7.232s
00:34:09.688 user 0m5.248s
00:34:09.688 sys 0m3.199s
00:34:09.688 10:43:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:09.689 10:43:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:34:09.689 ************************************
00:34:09.689 END TEST nvmf_aer
00:34:09.689 ************************************
00:34:09.690 10:43:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:34:09.690 10:43:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:09.690 10:43:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:09.691 10:43:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.691 ************************************
00:34:09.691 START TEST nvmf_async_init
00:34:09.692 ************************************
00:34:09.692 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:34:09.692 * Looking for test storage...
00:34:09.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.692 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:09.693 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:34:09.693 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:09.694 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:09.694 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:09.694 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:09.695 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:09.695 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:34:09.695 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:34:09.696 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:34:09.696 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:34:09.696 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:34:09.697 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:34:09.697 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:34:09.697 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:09.698 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:34:09.698 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:34:09.698 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:09.699 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:09.699 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:34:09.699 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:34:09.700 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:09.700 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:34:09.700 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:34:09.701 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:34:09.701 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:34:09.701 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:09.701 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:34:09.702 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:34:09.702 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:09.702 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:09.702 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:34:09.703 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.703 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:09.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.704 --rc genhtml_branch_coverage=1 00:34:09.704 --rc genhtml_function_coverage=1 00:34:09.704 --rc genhtml_legend=1 00:34:09.704 --rc geninfo_all_blocks=1 00:34:09.704 --rc geninfo_unexecuted_blocks=1 00:34:09.704 00:34:09.704 ' 00:34:09.704 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:09.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.705 --rc genhtml_branch_coverage=1 00:34:09.705 --rc genhtml_function_coverage=1 00:34:09.705 --rc genhtml_legend=1 00:34:09.705 --rc geninfo_all_blocks=1 00:34:09.705 --rc geninfo_unexecuted_blocks=1 00:34:09.705 00:34:09.705 ' 00:34:09.706 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.706 --rc genhtml_branch_coverage=1 00:34:09.706 --rc genhtml_function_coverage=1 00:34:09.706 --rc genhtml_legend=1 00:34:09.706 --rc geninfo_all_blocks=1 00:34:09.706 --rc geninfo_unexecuted_blocks=1 00:34:09.706 00:34:09.706 ' 00:34:09.707 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:09.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.707 --rc genhtml_branch_coverage=1 00:34:09.707 --rc genhtml_function_coverage=1 00:34:09.707 --rc genhtml_legend=1 00:34:09.707 --rc geninfo_all_blocks=1 00:34:09.708 --rc geninfo_unexecuted_blocks=1 00:34:09.708 00:34:09.708 ' 00:34:09.708 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.708 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:34:09.709 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.709 10:43:30 
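The xtrace above walks scripts/common.sh comparing the installed lcov version against 2: each version string is split on '.', '-' and ':' into an array and the components are compared left to right. A minimal standalone sketch of that comparison, under the assumption it behaves as traced (function body is illustrative, not the exact SPDK implementation):

    lt() {                                  # succeed when version $1 sorts before $2
        local IFS='.-:'
        local -a ver1 ver2
        local i
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # earliest differing component decides
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1                            # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the 'lt 1.15 2' call traced above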
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.709 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.710 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.710 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.710 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.711 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.711 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.711 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.711 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.712 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:09.712 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:09.713 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.713 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.713 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.714 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.714 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.714 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:34:09.715 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.715 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.716 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.718 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.720 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.722 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.722 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:34:09.725 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.725 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:34:09.725 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:09.726 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:09.726 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.726 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.727 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.727 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:09.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:09.728 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:09.728 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:09.728 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:09.729 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:34:09.729 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:34:09.729 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:34:09.730 10:43:30 
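The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the guarded environment variable is unset, so test receives an empty string where -eq requires an integer. A hedged sketch of the failure mode and the usual guard (the variable and option names are illustrative placeholders, not the actual SPDK ones):

    # Trips the error whenever the flag is unset or empty:
    [ "$SOME_TEST_FLAG" -eq 1 ] && NVMF_APP+=(--some-option)
    #   -> [: : integer expression expected

    # Defaulting the expansion keeps the comparison well-formed:
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && NVMF_APP+=(--some-option)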
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:34:09.730 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:34:09.730 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:34:09.731 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=acd25d636cda4d02943548e5eee23ed8 00:34:09.731 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:34:09.731 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:09.732 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.732 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:09.732 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:09.732 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:09.733 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.733 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.734 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.734 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:09.734 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:09.735 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:34:09.735 10:43:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.735 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.736 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.736 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.736 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.736 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.737 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.737 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.737 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.738 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.738 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:34:09.738 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.738 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:34:09.739 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.739 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:34:09.739 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.740 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.740 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.741 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.741 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.741 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.741 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.742 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.742 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.742 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.743 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.743 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.743 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.744 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.744 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.744 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.744 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.745 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.745 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.745 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.745 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:09.746 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:09.746 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.746 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.746 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.747 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.747 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.747 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.747 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:09.747 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:09.748 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.748 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.748 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.749 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.749 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.749 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.749 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.749 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.750 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.750 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.750 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.751 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.751 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.751 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.751 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.752 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:09.752 Found net devices under 0000:84:00.0: cvl_0_0 00:34:09.752 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.752 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.753 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.753 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.753 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.753 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.754 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.754 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.754 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:09.754 Found net devices under 0000:84:00.1: cvl_0_1 00:34:09.755 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.755 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:09.755 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:34:09.755 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:09.755 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:09.756 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:09.756 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.756 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.757 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.757 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.757 10:43:33 
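With both E810 ports discovered (cvl_0_0 and cvl_0_1), nvmf_tcp_init splits them into a target network namespace and a root-namespace initiator, which the trace below performs step by step. Condensed into a sketch, with interface names and addresses taken from this log rather than from a general recipe:

    ip netns add cvl_0_0_ns_spdk                               # namespace for the NVMe-oF target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                         # reachability checks, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1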
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.757 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.758 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.758 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.758 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.758 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.759 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.759 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.759 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.760 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.760 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.760 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.761 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.761 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.761 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.761 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.762 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.762 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.763 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:34:09.763 00:34:09.763 --- 10.0.0.2 ping statistics --- 00:34:09.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.763 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:34:09.764 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:34:09.764 00:34:09.764 --- 10.0.0.1 ping statistics --- 00:34:09.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.764 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:34:09.765 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.765 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:34:09.765 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:09.765 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.766 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:09.766 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:09.766 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.766 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:09.767 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:09.767 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:34:09.767 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:09.768 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:09.768 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.768 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2182523 00:34:09.769 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:34:09.769 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2182523 00:34:09.769 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2182523 ']' 00:34:09.769 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.770 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:09.770 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.771 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:09.771 10:43:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.771 [2024-12-09 10:43:33.783430] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:34:09.837 [2024-12-09 10:43:33.783539] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:09.837 [2024-12-09 10:43:33.925295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.837 [2024-12-09 10:43:34.042950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.838 [2024-12-09 10:43:34.043072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.838 [2024-12-09 10:43:34.043108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.839 [2024-12-09 10:43:34.043138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.839 [2024-12-09 10:43:34.043164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:09.840 [2024-12-09 10:43:34.044546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.840 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:09.840 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:34:09.841 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:09.841 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:09.841 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.842 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.843 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:09.843 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.843 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.844 [2024-12-09 10:43:34.407295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.844 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.845 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:34:09.845 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.846 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.846 null0 00:34:09.846 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.846 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:34:09.847 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.847 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.847 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.847 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:34:09.848 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:09.848 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.848 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.849 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g acd25d636cda4d02943548e5eee23ed8 00:34:09.849 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.849 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.850 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.850 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.850 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.851 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.851 [2024-12-09 10:43:34.448028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.851 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.852 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:34:09.852 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.852 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.852 nvme0n1 00:34:09.852 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.853 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:09.853 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.853 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.853 [ 00:34:09.853 { 00:34:09.853 "name": "nvme0n1", 00:34:09.853 "aliases": [ 00:34:09.853 "acd25d63-6cda-4d02-9435-48e5eee23ed8" 00:34:09.853 ], 00:34:09.854 "product_name": "NVMe disk", 00:34:09.854 "block_size": 512, 00:34:09.854 "num_blocks": 2097152, 00:34:09.854 "uuid": "acd25d63-6cda-4d02-9435-48e5eee23ed8", 00:34:09.854 "numa_id": 1, 00:34:09.854 "assigned_rate_limits": { 00:34:09.854 "rw_ios_per_sec": 0, 00:34:09.854 "rw_mbytes_per_sec": 0, 00:34:09.854 "r_mbytes_per_sec": 0, 00:34:09.854 "w_mbytes_per_sec": 0 00:34:09.854 }, 00:34:09.854 "claimed": false, 00:34:09.854 "zoned": false, 00:34:09.854 "supported_io_types": { 00:34:09.854 "read": true, 00:34:09.854 "write": true, 00:34:09.855 "unmap": false, 00:34:09.855 "flush": true, 00:34:09.855 "reset": true, 00:34:09.855 "nvme_admin": true, 00:34:09.855 "nvme_io": true, 00:34:09.855 "nvme_io_md": false, 00:34:09.855 "write_zeroes": true, 00:34:09.855 "zcopy": false, 00:34:09.855 "get_zone_info": false, 00:34:09.855 "zone_management": false, 00:34:09.855 "zone_append": false, 00:34:09.855 "compare": true, 00:34:09.855 "compare_and_write": true, 00:34:09.855 "abort": true, 00:34:09.855 "seek_hole": false, 00:34:09.855 "seek_data": false, 00:34:09.855 "copy": true, 00:34:09.855 "nvme_iov_md": false 00:34:09.855 }, 00:34:09.856 
"memory_domains": [ 00:34:09.856 { 00:34:09.856 "dma_device_id": "system", 00:34:09.856 "dma_device_type": 1 00:34:09.856 } 00:34:09.856 ], 00:34:09.856 "driver_specific": { 00:34:09.856 "nvme": [ 00:34:09.856 { 00:34:09.856 "trid": { 00:34:09.856 "trtype": "TCP", 00:34:09.856 "adrfam": "IPv4", 00:34:09.856 "traddr": "10.0.0.2", 00:34:09.856 "trsvcid": "4420", 00:34:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:09.856 }, 00:34:09.856 "ctrlr_data": { 00:34:09.856 "cntlid": 1, 00:34:09.856 "vendor_id": "0x8086", 00:34:09.857 "model_number": "SPDK bdev Controller", 00:34:09.857 "serial_number": "00000000000000000000", 00:34:09.857 "firmware_revision": "25.01", 00:34:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:09.857 "oacs": { 00:34:09.857 "security": 0, 00:34:09.857 "format": 0, 00:34:09.857 "firmware": 0, 00:34:09.857 "ns_manage": 0 00:34:09.857 }, 00:34:09.857 "multi_ctrlr": true, 00:34:09.857 "ana_reporting": false 00:34:09.857 }, 00:34:09.857 "vs": { 00:34:09.858 "nvme_version": "1.3" 00:34:09.858 }, 00:34:09.858 "ns_data": { 00:34:09.858 "id": 1, 00:34:09.858 "can_share": true 00:34:09.858 } 00:34:09.858 } 00:34:09.858 ], 00:34:09.858 "mp_policy": "active_passive" 00:34:09.858 } 00:34:09.858 } 00:34:09.858 ] 00:34:09.858 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.859 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:34:09.859 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.859 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.860 [2024-12-09 10:43:34.712240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.860 [2024-12-09 10:43:34.712459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf7e40 (9): Bad file descriptor 00:34:09.861 [2024-12-09 10:43:34.855069] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:34:09.861 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.861 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:09.862 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.862 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.862 [ 00:34:09.862 { 00:34:09.862 "name": "nvme0n1", 00:34:09.862 "aliases": [ 00:34:09.862 "acd25d63-6cda-4d02-9435-48e5eee23ed8" 00:34:09.862 ], 00:34:09.862 "product_name": "NVMe disk", 00:34:09.862 "block_size": 512, 00:34:09.862 "num_blocks": 2097152, 00:34:09.863 "uuid": "acd25d63-6cda-4d02-9435-48e5eee23ed8", 00:34:09.863 "numa_id": 1, 00:34:09.863 "assigned_rate_limits": { 00:34:09.863 "rw_ios_per_sec": 0, 00:34:09.863 "rw_mbytes_per_sec": 0, 00:34:09.863 "r_mbytes_per_sec": 0, 00:34:09.863 "w_mbytes_per_sec": 0 00:34:09.863 }, 00:34:09.863 "claimed": false, 00:34:09.863 "zoned": false, 00:34:09.863 "supported_io_types": { 00:34:09.863 "read": true, 00:34:09.863 "write": true, 00:34:09.864 "unmap": false, 00:34:09.864 "flush": true, 00:34:09.864 "reset": true, 00:34:09.864 "nvme_admin": true, 00:34:09.864 "nvme_io": true, 00:34:09.864 "nvme_io_md": false, 00:34:09.864 "write_zeroes": true, 00:34:09.864 "zcopy": false, 00:34:09.864 "get_zone_info": false, 00:34:09.864 "zone_management": false, 00:34:09.864 "zone_append": false, 00:34:09.864 "compare": true, 00:34:09.864 "compare_and_write": true, 00:34:09.865 "abort": true, 00:34:09.865 "seek_hole": false, 00:34:09.865 "seek_data": false, 00:34:09.865 "copy": true, 00:34:09.865 "nvme_iov_md": false 00:34:09.865 }, 00:34:09.865 "memory_domains": [ 00:34:09.865 { 00:34:09.865 "dma_device_id": "system", 00:34:09.866 "dma_device_type": 1 00:34:09.866 } 00:34:09.866 ], 00:34:09.866 "driver_specific": { 00:34:09.866 "nvme": [ 00:34:09.866 { 00:34:09.866 "trid": { 00:34:09.867 "trtype": "TCP", 00:34:09.867 "adrfam": "IPv4", 00:34:09.867 "traddr": "10.0.0.2", 00:34:09.867 "trsvcid": "4420", 00:34:09.867 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:09.867 }, 00:34:09.867 "ctrlr_data": { 00:34:09.867 "cntlid": 2, 00:34:09.867 "vendor_id": "0x8086", 00:34:09.867 "model_number": "SPDK bdev Controller", 00:34:09.867 "serial_number": "00000000000000000000", 00:34:09.867 "firmware_revision": "25.01", 00:34:09.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:09.868 "oacs": { 00:34:09.868 "security": 0, 00:34:09.868 "format": 0, 00:34:09.868 "firmware": 0, 00:34:09.868 "ns_manage": 0 00:34:09.868 }, 00:34:09.868 "multi_ctrlr": true, 00:34:09.868 "ana_reporting": false 00:34:09.868 }, 00:34:09.868 "vs": { 00:34:09.868 "nvme_version": "1.3" 00:34:09.868 }, 00:34:09.868 "ns_data": { 00:34:09.868 "id": 1, 00:34:09.868 "can_share": true 00:34:09.868 } 00:34:09.868 } 00:34:09.868 ], 00:34:09.868 "mp_policy": "active_passive" 00:34:09.868 } 00:34:09.868 } 00:34:09.868 ] 00:34:09.869 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.869 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.869 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.870 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.870 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
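Up to this point, the plain-TCP phase of the async_init test reduces to the following RPC sequence, condensed from the rpc_cmd calls traced above (the bare rpc.py spelling is illustrative; the harness invokes it through rpc_cmd):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512              # 1024 MiB null bdev, 512 B blocks -> 2097152 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a        # -a: allow any host, for now
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g acd25d636cda4d02943548e5eee23ed8
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0                   # surfaces the namespace as bdev nvme0n1
    rpc.py bdev_nvme_detach_controller nvme0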
00:34:09.873 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:34:09.873 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZnNgwyfBgE 00:34:09.874 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:09.874 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZnNgwyfBgE 00:34:09.875 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ZnNgwyfBgE 00:34:09.875 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.875 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.876 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.876 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:34:09.876 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.877 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.877 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.878 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:34:09.878 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.878 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.881 [2024-12-09 10:43:34.929149] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:09.881 [2024-12-09 10:43:34.929431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:09.882 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.882 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:34:09.883 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.883 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.883 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.884 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:34:09.884 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.885 10:43:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.885 [2024-12-09 10:43:34.945350] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:09.885 nvme0n1 00:34:09.885 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.886 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:34:09.886 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.886 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.886 [ 00:34:09.886 { 00:34:09.886 "name": "nvme0n1", 00:34:09.886 "aliases": [ 00:34:09.887 "acd25d63-6cda-4d02-9435-48e5eee23ed8" 00:34:09.887 ], 00:34:09.887 "product_name": "NVMe disk", 00:34:09.887 "block_size": 512, 00:34:09.887 "num_blocks": 2097152, 00:34:09.887 "uuid": "acd25d63-6cda-4d02-9435-48e5eee23ed8", 00:34:09.887 "numa_id": 1, 00:34:09.887 "assigned_rate_limits": { 00:34:09.887 "rw_ios_per_sec": 0, 00:34:09.887 "rw_mbytes_per_sec": 0, 00:34:09.888 "r_mbytes_per_sec": 0, 00:34:09.888 "w_mbytes_per_sec": 0 00:34:09.888 }, 00:34:09.888 "claimed": false, 00:34:09.888 "zoned": false, 00:34:09.888 "supported_io_types": { 00:34:09.888 "read": true, 00:34:09.888 "write": true, 00:34:09.888 "unmap": false, 00:34:09.888 "flush": true, 00:34:09.888 "reset": true, 00:34:09.888 "nvme_admin": true, 00:34:09.888 "nvme_io": true, 00:34:09.888 "nvme_io_md": false, 00:34:09.889 "write_zeroes": true, 00:34:09.889 "zcopy": false, 00:34:09.889 "get_zone_info": false, 00:34:09.889 "zone_management": false, 00:34:09.889 "zone_append": false, 00:34:09.889 "compare": true, 00:34:09.889 "compare_and_write": true, 00:34:09.889 "abort": true, 00:34:09.889 "seek_hole": false, 00:34:09.889 "seek_data": false, 00:34:09.889 "copy": true, 00:34:09.889 "nvme_iov_md": false 00:34:09.889 }, 00:34:09.889 "memory_domains": [ 00:34:09.889 { 00:34:09.890 "dma_device_id": "system", 00:34:09.890 "dma_device_type": 1 00:34:09.890 } 00:34:09.890 ], 00:34:09.890 "driver_specific": { 00:34:09.890 "nvme": [ 00:34:09.890 { 00:34:09.890 "trid": { 00:34:09.890 "trtype": "TCP", 00:34:09.890 "adrfam": "IPv4", 00:34:09.890 "traddr": "10.0.0.2", 00:34:09.890 "trsvcid": "4421", 00:34:09.890 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:09.890 }, 00:34:09.890 "ctrlr_data": { 00:34:09.890 "cntlid": 3, 00:34:09.891 "vendor_id": "0x8086", 00:34:09.891 "model_number": "SPDK bdev Controller", 00:34:09.891 "serial_number": "00000000000000000000", 00:34:09.891 "firmware_revision": "25.01", 00:34:09.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:09.891 "oacs": { 00:34:09.891 "security": 0, 00:34:09.891 "format": 0, 00:34:09.891 "firmware": 0, 00:34:09.891 "ns_manage": 0 00:34:09.891 }, 00:34:09.892 "multi_ctrlr": true, 00:34:09.892 "ana_reporting": false 00:34:09.892 }, 00:34:09.892 "vs": { 00:34:09.892 "nvme_version": "1.3" 00:34:09.892 }, 00:34:09.892 "ns_data": { 00:34:09.892 "id": 1, 00:34:09.892 "can_share": true 00:34:09.892 } 00:34:09.892 } 00:34:09.892 ], 00:34:09.892 "mp_policy": "active_passive" 00:34:09.892 } 00:34:09.892 } 00:34:09.892 ] 00:34:09.893 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.893 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.893 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.894 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.894 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.895 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ZnNgwyfBgE 00:34:09.895 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
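The TLS leg just traced gates the subsystem behind a pre-shared key: the key file is registered with the keyring, open access is disabled, a second listener on port 4421 is created with --secure-channel, and the host NQN is admitted with that key. Condensed into a sketch (key string copied verbatim from the trace; the "TLS support is considered experimental" notices in the log apply):

    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"                              # restrict permissions on the key material

    rpc.py keyring_file_add_key key0 "$KEY_PATH"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0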
00:34:09.895 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:34:09.896 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.896 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:34:09.896 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.897 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:34:09.897 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.897 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.897 rmmod nvme_tcp 00:34:09.897 rmmod nvme_fabrics 00:34:09.897 rmmod nvme_keyring 00:34:09.898 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.898 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:34:09.898 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:34:09.899 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2182523 ']' 00:34:09.899 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2182523 00:34:09.900 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2182523 ']' 00:34:09.900 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2182523 00:34:09.900 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:34:09.901 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.901 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182523 00:34:09.902 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:09.902 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:09.902 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182523' 00:34:09.903 killing process with pid 2182523 00:34:09.903 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2182523 00:34:09.903 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2182523 00:34:09.904 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.904 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.904 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.905 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:34:09.905 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:34:09.905 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.906 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.906 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.907 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.907 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:34:09.908 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.908 10:43:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.908 10:43:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:09.908 00:34:09.908 real 0m7.420s 00:34:09.908 user 0m3.301s 00:34:09.909 sys 0m3.025s 00:34:09.909 10:43:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.909 10:43:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:09.909 ************************************ 00:34:09.909 END TEST nvmf_async_init 00:34:09.910 ************************************ 00:34:09.910 10:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:34:09.911 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:09.911 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.911 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.911 ************************************ 00:34:09.911 START TEST dma 00:34:09.912 ************************************ 00:34:09.912 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:34:09.912 * Looking for test storage... 00:34:09.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.913 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:09.913 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:34:09.914 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:09.914 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:09.914 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:09.915 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:09.915 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:09.915 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:34:09.915 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:34:09.916 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:34:09.916 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:34:09.916 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:34:09.917 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:34:09.917 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:34:09.917 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:09.918 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:34:09.918 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:34:09.919 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:09.919 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:09.919 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:34:09.920 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:34:09.920 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:09.920 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:34:09.920 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:34:09.921 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:34:09.922 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.923 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:09.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.923 --rc genhtml_branch_coverage=1 00:34:09.924 --rc genhtml_function_coverage=1 00:34:09.924 --rc genhtml_legend=1 00:34:09.924 --rc geninfo_all_blocks=1 00:34:09.924 --rc geninfo_unexecuted_blocks=1 00:34:09.924 00:34:09.924 ' 00:34:09.924 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:09.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.924 --rc genhtml_branch_coverage=1 00:34:09.924 --rc genhtml_function_coverage=1 00:34:09.924 --rc genhtml_legend=1 00:34:09.924 --rc geninfo_all_blocks=1 00:34:09.924 --rc geninfo_unexecuted_blocks=1 00:34:09.924 00:34:09.924 ' 00:34:09.924 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:09.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.924 --rc genhtml_branch_coverage=1 00:34:09.924 --rc genhtml_function_coverage=1 00:34:09.925 --rc genhtml_legend=1 00:34:09.925 --rc geninfo_all_blocks=1 00:34:09.925 --rc geninfo_unexecuted_blocks=1 00:34:09.925 00:34:09.925 ' 00:34:09.925 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:09.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.925 --rc genhtml_branch_coverage=1 00:34:09.925 --rc genhtml_function_coverage=1 00:34:09.925 --rc genhtml_legend=1 00:34:09.925 --rc geninfo_all_blocks=1 00:34:09.925 --rc geninfo_unexecuted_blocks=1 00:34:09.925 00:34:09.925 ' 00:34:09.925 10:43:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.925 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:34:09.926 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.926 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.926 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.926 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.926 
10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.926 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.926 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.926 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.927 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.927 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.927 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:09.927 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:09.928 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.928 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.928 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.928 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.928 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.928 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:34:09.929 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.929 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.929 10:43:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.934 10:43:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.936 10:43:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.937 10:43:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.937 10:43:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:34:09.939 10:43:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.939 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:34:09.940 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:09.940 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:09.940 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.940 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.941 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.941 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:09.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:09.941 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:09.942 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:09.942 10:43:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:09.942 10:43:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:34:09.942 10:43:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:34:09.942 00:34:09.942 real 0m0.248s 00:34:09.942 user 0m0.150s 00:34:09.942 sys 0m0.113s 00:34:09.943 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.943 10:43:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:34:09.943 ************************************ 00:34:09.943 END TEST dma 00:34:09.943 ************************************ 00:34:09.944 10:43:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:34:09.944 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:09.944 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.944 10:43:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.944 ************************************ 00:34:09.944 START TEST nvmf_identify 00:34:09.945 
************************************ 00:34:09.945 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:34:09.945 * Looking for test storage... 00:34:09.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.946 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:09.946 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:34:09.946 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:09.946 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:09.947 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:09.947 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:09.947 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:09.947 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:34:09.947 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:34:09.948 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:34:09.948 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:34:09.948 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:34:09.948 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:34:09.948 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:34:09.949 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:09.949 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:34:09.949 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:34:09.949 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:09.949 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:09.950 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:34:09.950 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:34:09.950 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:09.950 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:34:09.951 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:34:09.951 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:34:09.951 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:34:09.951 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:09.952 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:34:09.952 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:34:09.952 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:09.953 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:09.953 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:34:09.953 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.953 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:09.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.954 --rc genhtml_branch_coverage=1 00:34:09.954 --rc genhtml_function_coverage=1 00:34:09.954 --rc genhtml_legend=1 00:34:09.954 --rc geninfo_all_blocks=1 00:34:09.954 --rc geninfo_unexecuted_blocks=1 00:34:09.954 00:34:09.954 ' 00:34:09.954 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:09.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.954 --rc genhtml_branch_coverage=1 00:34:09.955 --rc genhtml_function_coverage=1 00:34:09.955 --rc genhtml_legend=1 00:34:09.955 --rc geninfo_all_blocks=1 00:34:09.955 --rc geninfo_unexecuted_blocks=1 00:34:09.955 00:34:09.955 ' 00:34:09.955 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:09.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.955 --rc genhtml_branch_coverage=1 00:34:09.956 --rc genhtml_function_coverage=1 00:34:09.956 --rc genhtml_legend=1 00:34:09.956 --rc geninfo_all_blocks=1 00:34:09.956 --rc geninfo_unexecuted_blocks=1 00:34:09.956 00:34:09.956 ' 00:34:09.956 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:09.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.956 --rc genhtml_branch_coverage=1 00:34:09.956 --rc genhtml_function_coverage=1 00:34:09.957 --rc genhtml_legend=1 00:34:09.957 --rc geninfo_all_blocks=1 00:34:09.957 --rc geninfo_unexecuted_blocks=1 00:34:09.957 00:34:09.957 ' 00:34:09.957 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.958 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:34:09.959 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.959 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.959 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.959 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.960 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.960 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.960 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.960 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.961 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.961 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.961 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:09.962 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:09.962 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.962 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.963 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.963 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.963 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.964 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:34:09.964 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.964 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.965 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.966 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.968 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.970 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.970 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:34:09.972 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.973 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:34:09.973 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:09.973 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:09.973 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.974 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.974 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.974 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:09.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:09.975 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:09.975 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:09.975 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:09.975 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:09.976 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:09.976 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:34:09.976 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:09.976 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.977 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:09.977 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:09.977 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:09.977 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.978 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.978 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.978 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:09.978 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:09.979 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:34:09.979 10:43:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:09.979 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.979 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.980 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.980 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.980 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.980 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.981 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.981 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.981 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.981 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:34:09.981 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.982 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:34:09.982 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.982 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:34:09.983 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.983 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.983 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.984 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.984 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.984 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.985 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.985 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.986 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.986 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.986 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.987 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.987 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.987 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.988 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.988 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.988 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.988 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.989 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.989 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.989 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:09.990 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:09.990 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.990 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.990 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.991 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.991 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.991 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.992 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:09.992 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:09.992 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.993 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.993 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.993 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.994 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.994 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.994 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.994 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.995 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.995 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.995 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
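The lt 1.15 2 walk traced above (scripts/common.sh, cmp_versions) is a field-wise version compare: both strings are split on '.', '-' and ':' via IFS, padded to the longer field count, and compared numerically left to right. A minimal bash re-derivation of just the less-than case — the function name and the zero-defaulting of missing fields are this sketch's own, and the real helper additionally validates each field through decimal() first:

version_lt() {
    local -a v1 v2
    local IFS=.-:                                  # same separators the trace sets (IFS=.-:)
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do              # walk the longer of the two field lists
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                                       # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"      # succeeds here, matching the 'return 0' in the trace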
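Both times nvmf/common.sh is sourced above, its line 33 emits "[: : integer expression expected": build_nvmf_app_args evaluates [ '' -eq 1 ] because the flag being tested expands empty in this configuration, and -eq demands an integer operand. The failed test simply counts as false, so the run proceeds, but the usual empty-safe spellings avoid the noise; in this sketch $flag is a placeholder, not the variable's real name:

# shape that fails when the variable is empty or unset:
#   [ "$flag" -eq 1 ] && NVMF_APP+=(...)
# empty-safe alternatives:
[ "${flag:-0}" -eq 1 ] && echo "flag enabled"      # default the empty expansion to 0
[[ $flag == 1 ]] && echo "flag enabled"            # string comparison never errors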
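gather_supported_nvmf_pci_devs above matches both e810 ports (0x8086:0x159b at 0000:84:00.0 and .1), then resolves each PCI address to its kernel interface through the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob, yielding cvl_0_0 and cvl_0_1. The same sysfs lookup as a standalone sketch, with a helper name of its own:

pci_to_netdev() {
    local pci=$1 path
    # each driver-bound NIC port exposes its netdev name as a directory under .../net/
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                 # an unmatched glob stays literal when no driver is bound
        echo "${path##*/}"                         # keep only the interface name, e.g. cvl_0_0
    done
}
pci_to_netdev 0000:84:00.0                         # -> cvl_0_0 on this node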
00:34:09.996 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.996 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.996 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.997 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.997 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:09.997 Found net devices under 0000:84:00.0: cvl_0_0 00:34:09.998 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.998 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.998 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.999 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.999 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.999 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.000 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.000 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.000 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:10.001 Found net devices under 0000:84:00.1: cvl_0_1 00:34:10.001 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.001 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:10.001 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:34:10.002 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:10.002 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:10.002 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:10.003 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.003 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.003 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.004 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.004 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:10.004 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.005 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.005 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:10.005 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.006 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.006 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:10.007 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:10.007 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:10.007 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.008 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.008 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.008 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.009 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:10.009 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.010 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.010 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.011 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:10.011 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:10.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:34:10.012 00:34:10.012 --- 10.0.0.2 ping statistics --- 00:34:10.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.012 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:34:10.013 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:34:10.013 00:34:10.013 --- 10.0.0.1 ping statistics --- 00:34:10.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.013 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:10.014 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.014 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:34:10.014 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:10.015 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.015 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.015 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.016 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.016 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.016 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.017 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:34:10.017 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.017 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.018 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2184805 00:34:10.018 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:10.019 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:10.019 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2184805 00:34:10.020 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2184805 ']' 00:34:10.020 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.020 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.021 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.022 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.022 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.022 [2024-12-09 10:43:41.375556] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:34:10.023 [2024-12-09 10:43:41.375673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.023 [2024-12-09 10:43:41.507712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:10.024 [2024-12-09 10:43:41.620890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.024 [2024-12-09 10:43:41.620953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.025 [2024-12-09 10:43:41.620979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.025 [2024-12-09 10:43:41.621007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.026 [2024-12-09 10:43:41.621024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:10.026 [2024-12-09 10:43:41.626776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.026 [2024-12-09 10:43:41.626819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:10.026 [2024-12-09 10:43:41.626869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:10.027 [2024-12-09 10:43:41.626873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.027 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.027 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:34:10.028 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:10.028 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.028 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.029 [2024-12-09 10:43:41.757834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.029 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.029 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:34:10.030 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.030 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.030 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:10.031 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.031 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.031 Malloc0 00:34:10.031 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.032 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:10.032 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.032 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.032 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.033 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:34:10.033 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.034 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.034 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.034 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.035 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.035 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.035 [2024-12-09 10:43:41.857188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.036 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.036 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:10.036 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.037 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.037 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.037 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:34:10.038 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.038 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.038 [ 00:34:10.038 { 00:34:10.038 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:10.038 "subtype": "Discovery", 00:34:10.038 "listen_addresses": [ 00:34:10.038 { 00:34:10.038 "trtype": "TCP", 00:34:10.038 "adrfam": "IPv4", 00:34:10.038 "traddr": "10.0.0.2", 00:34:10.039 "trsvcid": "4420" 00:34:10.039 } 00:34:10.039 ], 00:34:10.039 "allow_any_host": true, 00:34:10.039 "hosts": [] 00:34:10.039 }, 00:34:10.039 { 00:34:10.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.039 "subtype": "NVMe", 00:34:10.039 "listen_addresses": [ 00:34:10.039 { 00:34:10.039 "trtype": "TCP", 00:34:10.039 "adrfam": "IPv4", 00:34:10.039 "traddr": "10.0.0.2", 00:34:10.039 "trsvcid": "4420" 00:34:10.039 } 00:34:10.039 ], 00:34:10.039 "allow_any_host": true, 00:34:10.039 "hosts": [], 00:34:10.039 "serial_number": "SPDK00000000000001", 00:34:10.040 "model_number": "SPDK bdev Controller", 00:34:10.040 "max_namespaces": 32, 00:34:10.040 "min_cntlid": 1, 00:34:10.040 "max_cntlid": 65519, 00:34:10.040 "namespaces": [ 00:34:10.040 { 00:34:10.040 "nsid": 1, 00:34:10.040 "bdev_name": "Malloc0", 00:34:10.040 "name": "Malloc0", 00:34:10.040 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:34:10.040 "eui64": "ABCDEF0123456789", 00:34:10.040 "uuid": "c0cf6cb7-c9a9-49e5-b57c-08310c2d0a13" 00:34:10.040 } 00:34:10.040 ] 00:34:10.040 } 00:34:10.040 ] 00:34:10.041 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.042 10:43:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:34:10.042 [2024-12-09 10:43:41.914218] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:34:10.045 [2024-12-09 10:43:41.914315] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184957 ] 00:34:10.046 [2024-12-09 10:43:41.982381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:34:10.046 [2024-12-09 10:43:41.982457] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:34:10.046 [2024-12-09 10:43:41.982468] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:34:10.047 [2024-12-09 10:43:41.982492] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:34:10.047 [2024-12-09 10:43:41.982508] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:34:10.048 [2024-12-09 10:43:41.986259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:34:10.048 [2024-12-09 10:43:41.986318] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1044690 0 00:34:10.052 [2024-12-09 10:43:41.986517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:34:10.053 [2024-12-09 10:43:41.986537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:34:10.053 [2024-12-09 10:43:41.986550] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:34:10.053 [2024-12-09 10:43:41.986557] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:34:10.054 [2024-12-09 10:43:41.986608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.054 [2024-12-09 10:43:41.986621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.054 [2024-12-09 10:43:41.986630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.055 [2024-12-09 10:43:41.986650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:10.055 [2024-12-09 10:43:41.986677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.055 [2024-12-09 10:43:41.992734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.056 [2024-12-09 10:43:41.992754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.056 [2024-12-09 10:43:41.992762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.068 [2024-12-09 10:43:41.992769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.068 [2024-12-09 10:43:41.992793] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:34:10.068 [2024-12-09 10:43:41.992807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:34:10.069 [2024-12-09 10:43:41.992817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:34:10.069 [2024-12-09 10:43:41.992843] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.069 [2024-12-09 10:43:41.992853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.070 [2024-12-09 10:43:41.992860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.070 [2024-12-09 10:43:41.992871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.071 [2024-12-09 10:43:41.992896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.071 [2024-12-09 10:43:41.993046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.071 [2024-12-09 10:43:41.993058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.072 [2024-12-09 10:43:41.993065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.072 [2024-12-09 10:43:41.993072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.072 [2024-12-09 10:43:41.993086] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:34:10.073 [2024-12-09 10:43:41.993100] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:34:10.073 [2024-12-09 10:43:41.993113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.074 [2024-12-09 10:43:41.993120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.074 [2024-12-09 10:43:41.993126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.074 [2024-12-09 10:43:41.993137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.075 [2024-12-09 10:43:41.993158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.075 [2024-12-09 10:43:41.993255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.075 [2024-12-09 10:43:41.993268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.076 [2024-12-09 10:43:41.993275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.076 [2024-12-09 10:43:41.993281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.077 [2024-12-09 10:43:41.993290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:34:10.077 [2024-12-09 10:43:41.993305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:34:10.077 [2024-12-09 10:43:41.993322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.078 [2024-12-09 10:43:41.993330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.078 [2024-12-09 10:43:41.993336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.079 [2024-12-09 10:43:41.993347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.079 [2024-12-09 10:43:41.993368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 
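The target this identify pass is connecting to was plumbed by nvmf_tcp_init earlier in the trace (nvmf/common.sh@250-@291): the first e810 port moves into a namespace as the target NIC, the second stays in the root namespace as the initiator, and a ping in each direction proves reachability. Condensed, with every command lifted from the trace and only the comments added:

ip netns add cvl_0_0_ns_spdk                    # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # first port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator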
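The subsystem it queries was then provisioned over JSON-RPC (host/identify.sh@18-@35). In the harness, rpc_cmd wraps scripts/rpc.py against the target's /var/tmp/spdk.sock; spelled out as plain rpc.py calls from the spdk checkout, with the workspace prefix shortened, the sequence is:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # target inside the namespace
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                                          # prints the JSON dump shown above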
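The admin-queue exchange around this point is the standard NVMe-over-Fabrics controller bring-up: the ICReq/ICResp and FABRIC CONNECT already shown, property reads of VS, CAP and CC, and, continuing in the trace below, enable via CC.EN = 1, a poll until CSTS.RDY = 1, IDENTIFY controller (CNS 01h), a keep-alive every 5 s, and four outstanding ASYNC EVENT REQUESTs (cids 0-3). The whole exchange can be replayed against this target with the same invocation the harness used, path shortened to the spdk checkout:

# discovery-service identify with full transport/ctrlr tracing (-L all)
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all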
00:34:10.079 [2024-12-09 10:43:41.993446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.079 [2024-12-09 10:43:41.993460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.080 [2024-12-09 10:43:41.993466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.080 [2024-12-09 10:43:41.993473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.081 [2024-12-09 10:43:41.993482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:34:10.081 [2024-12-09 10:43:41.993499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.081 [2024-12-09 10:43:41.993508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.082 [2024-12-09 10:43:41.993514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.082 [2024-12-09 10:43:41.993524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.083 [2024-12-09 10:43:41.993546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.083 [2024-12-09 10:43:41.993627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.083 [2024-12-09 10:43:41.993640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.083 [2024-12-09 10:43:41.993646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.084 [2024-12-09 10:43:41.993653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.084 [2024-12-09 10:43:41.993661] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:34:10.085 [2024-12-09 10:43:41.993670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:34:10.085 [2024-12-09 10:43:41.993683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:34:10.086 [2024-12-09 10:43:41.993794] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:34:10.086 [2024-12-09 10:43:41.993806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:34:10.087 [2024-12-09 10:43:41.993823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.087 [2024-12-09 10:43:41.993831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.087 [2024-12-09 10:43:41.993837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.088 [2024-12-09 10:43:41.993848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.088 [2024-12-09 10:43:41.993871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.089 [2024-12-09 10:43:41.994000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.089 [2024-12-09 10:43:41.994014] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.089 [2024-12-09 10:43:41.994021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.090 [2024-12-09 10:43:41.994027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.090 [2024-12-09 10:43:41.994057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:34:10.090 [2024-12-09 10:43:41.994076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.091 [2024-12-09 10:43:41.994085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.091 [2024-12-09 10:43:41.994092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.091 [2024-12-09 10:43:41.994102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.092 [2024-12-09 10:43:41.994123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.092 [2024-12-09 10:43:41.994201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.092 [2024-12-09 10:43:41.994213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.093 [2024-12-09 10:43:41.994220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.093 [2024-12-09 10:43:41.994226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.094 [2024-12-09 10:43:41.994233] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:34:10.094 [2024-12-09 10:43:41.994241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:34:10.095 [2024-12-09 10:43:41.994255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:34:10.095 [2024-12-09 10:43:41.994268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:34:10.096 [2024-12-09 10:43:41.994285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.096 [2024-12-09 10:43:41.994293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.096 [2024-12-09 10:43:41.994304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.097 [2024-12-09 10:43:41.994325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.097 [2024-12-09 10:43:41.994457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.097 [2024-12-09 10:43:41.994471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.097 [2024-12-09 10:43:41.994478] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.098 [2024-12-09 10:43:41.994485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1044690): datao=0, datal=4096, cccid=0 00:34:10.098 [2024-12-09 10:43:41.994492] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x10a6100) on tqpair(0x1044690): expected_datao=0, payload_size=4096 00:34:10.098 [2024-12-09 10:43:41.994500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.099 [2024-12-09 10:43:41.994518] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.099 [2024-12-09 10:43:41.994528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.099 [2024-12-09 10:43:42.039752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.099 [2024-12-09 10:43:42.039771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.099 [2024-12-09 10:43:42.039779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.100 [2024-12-09 10:43:42.039786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.100 [2024-12-09 10:43:42.039807] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:34:10.101 [2024-12-09 10:43:42.039817] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:34:10.101 [2024-12-09 10:43:42.039825] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:34:10.101 [2024-12-09 10:43:42.039838] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:34:10.102 [2024-12-09 10:43:42.039848] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:34:10.102 [2024-12-09 10:43:42.039857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:34:10.102 [2024-12-09 10:43:42.039873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:34:10.103 [2024-12-09 10:43:42.039886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.103 [2024-12-09 10:43:42.039894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.103 [2024-12-09 10:43:42.039901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.104 [2024-12-09 10:43:42.039913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:10.104 [2024-12-09 10:43:42.039949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.104 [2024-12-09 10:43:42.040098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.104 [2024-12-09 10:43:42.040113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.105 [2024-12-09 10:43:42.040120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.105 [2024-12-09 10:43:42.040126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.105 [2024-12-09 10:43:42.040139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.105 [2024-12-09 10:43:42.040147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.106 [2024-12-09 10:43:42.040153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1044690) 00:34:10.106 
[2024-12-09 10:43:42.040163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.106 [2024-12-09 10:43:42.040173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.107 [2024-12-09 10:43:42.040180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.107 [2024-12-09 10:43:42.040186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1044690) 00:34:10.107 [2024-12-09 10:43:42.040195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.107 [2024-12-09 10:43:42.040204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.108 [2024-12-09 10:43:42.040211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.108 [2024-12-09 10:43:42.040218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1044690) 00:34:10.108 [2024-12-09 10:43:42.040226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.109 [2024-12-09 10:43:42.040236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.109 [2024-12-09 10:43:42.040242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.109 [2024-12-09 10:43:42.040248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.110 [2024-12-09 10:43:42.040257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.110 [2024-12-09 10:43:42.040266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:34:10.111 [2024-12-09 10:43:42.040286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:34:10.111 [2024-12-09 10:43:42.040299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.111 [2024-12-09 10:43:42.040310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1044690) 00:34:10.112 [2024-12-09 10:43:42.040321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.112 [2024-12-09 10:43:42.040356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6100, cid 0, qid 0 00:34:10.112 [2024-12-09 10:43:42.040367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6280, cid 1, qid 0 00:34:10.112 [2024-12-09 10:43:42.040374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6400, cid 2, qid 0 00:34:10.113 [2024-12-09 10:43:42.040382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.113 [2024-12-09 10:43:42.040389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6700, cid 4, qid 0 00:34:10.113 [2024-12-09 10:43:42.040552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.113 [2024-12-09 10:43:42.040564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.114 [2024-12-09 10:43:42.040571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:34:10.114 [2024-12-09 10:43:42.040578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6700) on tqpair=0x1044690 00:34:10.114 [2024-12-09 10:43:42.040588] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:34:10.115 [2024-12-09 10:43:42.040596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:34:10.115 [2024-12-09 10:43:42.040614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.115 [2024-12-09 10:43:42.040623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1044690) 00:34:10.116 [2024-12-09 10:43:42.040634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.116 [2024-12-09 10:43:42.040655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6700, cid 4, qid 0 00:34:10.116 [2024-12-09 10:43:42.040813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.117 [2024-12-09 10:43:42.040829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.117 [2024-12-09 10:43:42.040837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.117 [2024-12-09 10:43:42.040843] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1044690): datao=0, datal=4096, cccid=4 00:34:10.118 [2024-12-09 10:43:42.040851] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a6700) on tqpair(0x1044690): expected_datao=0, payload_size=4096 00:34:10.118 [2024-12-09 10:43:42.040858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.118 [2024-12-09 10:43:42.040869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.118 [2024-12-09 10:43:42.040876] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.118 [2024-12-09 10:43:42.040899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.119 [2024-12-09 10:43:42.040909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.119 [2024-12-09 10:43:42.040916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.119 [2024-12-09 10:43:42.040923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6700) on tqpair=0x1044690 00:34:10.120 [2024-12-09 10:43:42.040944] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:34:10.120 [2024-12-09 10:43:42.040984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.120 [2024-12-09 10:43:42.040995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1044690) 00:34:10.121 [2024-12-09 10:43:42.041006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.121 [2024-12-09 10:43:42.041018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.121 [2024-12-09 10:43:42.041044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.121 [2024-12-09 10:43:42.041051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1044690) 00:34:10.122 [2024-12-09 10:43:42.041061] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.122 [2024-12-09 10:43:42.041088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6700, cid 4, qid 0 00:34:10.122 [2024-12-09 10:43:42.041100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6880, cid 5, qid 0 00:34:10.123 [2024-12-09 10:43:42.041285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.123 [2024-12-09 10:43:42.041299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.123 [2024-12-09 10:43:42.041306] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.123 [2024-12-09 10:43:42.041312] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1044690): datao=0, datal=1024, cccid=4 00:34:10.124 [2024-12-09 10:43:42.041320] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a6700) on tqpair(0x1044690): expected_datao=0, payload_size=1024 00:34:10.124 [2024-12-09 10:43:42.041327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.124 [2024-12-09 10:43:42.041336] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.124 [2024-12-09 10:43:42.041343] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.125 [2024-12-09 10:43:42.041351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.125 [2024-12-09 10:43:42.041360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.125 [2024-12-09 10:43:42.041366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.125 [2024-12-09 10:43:42.041373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6880) on tqpair=0x1044690 00:34:10.126 [2024-12-09 10:43:42.081835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.126 [2024-12-09 10:43:42.081853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.126 [2024-12-09 10:43:42.081861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.127 [2024-12-09 10:43:42.081868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6700) on tqpair=0x1044690 00:34:10.127 [2024-12-09 10:43:42.081887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.127 [2024-12-09 10:43:42.081896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1044690) 00:34:10.128 [2024-12-09 10:43:42.081908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.128 [2024-12-09 10:43:42.081938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6700, cid 4, qid 0 00:34:10.128 [2024-12-09 10:43:42.082065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.128 [2024-12-09 10:43:42.082078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.129 [2024-12-09 10:43:42.082084] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.129 [2024-12-09 10:43:42.082090] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1044690): datao=0, datal=3072, cccid=4 00:34:10.130 [2024-12-09 10:43:42.082098] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a6700) on tqpair(0x1044690): expected_datao=0, payload_size=3072 00:34:10.130 [2024-12-09 10:43:42.082105] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.130 [2024-12-09 10:43:42.082115] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.130 [2024-12-09 10:43:42.082122] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.131 [2024-12-09 10:43:42.082134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.131 [2024-12-09 10:43:42.082143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.131 [2024-12-09 10:43:42.082150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.132 [2024-12-09 10:43:42.082156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6700) on tqpair=0x1044690 00:34:10.132 [2024-12-09 10:43:42.082176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.132 [2024-12-09 10:43:42.082186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1044690) 00:34:10.133 [2024-12-09 10:43:42.082196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.133 [2024-12-09 10:43:42.082224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6700, cid 4, qid 0 00:34:10.133 [2024-12-09 10:43:42.085748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.133 [2024-12-09 10:43:42.085765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.134 [2024-12-09 10:43:42.085772] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.134 [2024-12-09 10:43:42.085779] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1044690): datao=0, datal=8, cccid=4 00:34:10.134 [2024-12-09 10:43:42.085786] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10a6700) on tqpair(0x1044690): expected_datao=0, payload_size=8 00:34:10.135 [2024-12-09 10:43:42.085794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.135 [2024-12-09 10:43:42.085804] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.135 [2024-12-09 10:43:42.085811] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.135 [2024-12-09 10:43:42.122857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.135 [2024-12-09 10:43:42.122875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.136 [2024-12-09 10:43:42.122883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.136 [2024-12-09 10:43:42.122890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6700) on tqpair=0x1044690 00:34:10.136 ===================================================== 00:34:10.136 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:10.136 ===================================================== 00:34:10.136 Controller Capabilities/Features 00:34:10.136 ================================ 00:34:10.137 Vendor ID: 0000 00:34:10.137 Subsystem Vendor ID: 0000 00:34:10.137 Serial Number: .................... 00:34:10.137 Model Number: ........................................ 
00:34:10.137 Firmware Version: 25.01 00:34:10.137 Recommended Arb Burst: 0 00:34:10.137 IEEE OUI Identifier: 00 00 00 00:34:10.137 Multi-path I/O 00:34:10.138 May have multiple subsystem ports: No 00:34:10.138 May have multiple controllers: No 00:34:10.138 Associated with SR-IOV VF: No 00:34:10.138 Max Data Transfer Size: 131072 00:34:10.138 Max Number of Namespaces: 0 00:34:10.138 Max Number of I/O Queues: 1024 00:34:10.138 NVMe Specification Version (VS): 1.3 00:34:10.138 NVMe Specification Version (Identify): 1.3 00:34:10.138 Maximum Queue Entries: 128 00:34:10.139 Contiguous Queues Required: Yes 00:34:10.139 Arbitration Mechanisms Supported 00:34:10.139 Weighted Round Robin: Not Supported 00:34:10.139 Vendor Specific: Not Supported 00:34:10.139 Reset Timeout: 15000 ms 00:34:10.139 Doorbell Stride: 4 bytes 00:34:10.139 NVM Subsystem Reset: Not Supported 00:34:10.139 Command Sets Supported 00:34:10.140 NVM Command Set: Supported 00:34:10.140 Boot Partition: Not Supported 00:34:10.140 Memory Page Size Minimum: 4096 bytes 00:34:10.140 Memory Page Size Maximum: 4096 bytes 00:34:10.140 Persistent Memory Region: Not Supported 00:34:10.140 Optional Asynchronous Events Supported 00:34:10.140 Namespace Attribute Notices: Not Supported 00:34:10.140 Firmware Activation Notices: Not Supported 00:34:10.141 ANA Change Notices: Not Supported 00:34:10.141 PLE Aggregate Log Change Notices: Not Supported 00:34:10.141 LBA Status Info Alert Notices: Not Supported 00:34:10.141 EGE Aggregate Log Change Notices: Not Supported 00:34:10.141 Normal NVM Subsystem Shutdown event: Not Supported 00:34:10.141 Zone Descriptor Change Notices: Not Supported 00:34:10.141 Discovery Log Change Notices: Supported 00:34:10.141 Controller Attributes 00:34:10.141 128-bit Host Identifier: Not Supported 00:34:10.142 Non-Operational Permissive Mode: Not Supported 00:34:10.142 NVM Sets: Not Supported 00:34:10.142 Read Recovery Levels: Not Supported 00:34:10.142 Endurance Groups: Not Supported 00:34:10.142 Predictable Latency Mode: Not Supported 00:34:10.142 Traffic Based Keep Alive: Not Supported 00:34:10.142 Namespace Granularity: Not Supported 00:34:10.142 SQ Associations: Not Supported 00:34:10.143 UUID List: Not Supported 00:34:10.143 Multi-Domain Subsystem: Not Supported 00:34:10.143 Fixed Capacity Management: Not Supported 00:34:10.143 Variable Capacity Management: Not Supported 00:34:10.143 Delete Endurance Group: Not Supported 00:34:10.143 Delete NVM Set: Not Supported 00:34:10.143 Extended LBA Formats Supported: Not Supported 00:34:10.143 Flexible Data Placement Supported: Not Supported 00:34:10.143 00:34:10.144 Controller Memory Buffer Support 00:34:10.144 ================================ 00:34:10.144 Supported: No 00:34:10.144 00:34:10.144 Persistent Memory Region Support 00:34:10.144 ================================ 00:34:10.144 Supported: No 00:34:10.144 00:34:10.144 Admin Command Set Attributes 00:34:10.144 ============================ 00:34:10.144 Security Send/Receive: Not Supported 00:34:10.144 Format NVM: Not Supported 00:34:10.144 Firmware Activate/Download: Not Supported 00:34:10.145 Namespace Management: Not Supported 00:34:10.145 Device Self-Test: Not Supported 00:34:10.145 Directives: Not Supported 00:34:10.145 NVMe-MI: Not Supported 00:34:10.145 Virtualization Management: Not Supported 00:34:10.145 Doorbell Buffer Config: Not Supported 00:34:10.145 Get LBA Status Capability: Not Supported 00:34:10.146 Command & Feature Lockdown Capability: Not Supported 00:34:10.146 Abort Command Limit: 1 00:34:10.146 Async
Event Request Limit: 4 00:34:10.146 Number of Firmware Slots: N/A 00:34:10.146 Firmware Slot 1 Read-Only: N/A 00:34:10.146 Firmware Activation Without Reset: N/A 00:34:10.146 Multiple Update Detection Support: N/A 00:34:10.146 Firmware Update Granularity: No Information Provided 00:34:10.147 Per-Namespace SMART Log: No 00:34:10.147 Asymmetric Namespace Access Log Page: Not Supported 00:34:10.147 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:10.147 Command Effects Log Page: Not Supported 00:34:10.147 Get Log Page Extended Data: Supported 00:34:10.147 Telemetry Log Pages: Not Supported 00:34:10.171 Persistent Event Log Pages: Not Supported 00:34:10.171 Supported Log Pages Log Page: May Support 00:34:10.171 Commands Supported & Effects Log Page: Not Supported 00:34:10.171 Feature Identifiers & Effects Log Page: May Support 00:34:10.171 NVMe-MI Commands & Effects Log Page: May Support 00:34:10.171 Data Area 4 for Telemetry Log: Not Supported 00:34:10.172 Error Log Page Entries Supported: 128 00:34:10.172 Keep Alive: Not Supported 00:34:10.172 00:34:10.172 NVM Command Set Attributes 00:34:10.172 ========================== 00:34:10.172 Submission Queue Entry Size 00:34:10.172 Max: 1 00:34:10.172 Min: 1 00:34:10.172 Completion Queue Entry Size 00:34:10.172 Max: 1 00:34:10.172 Min: 1 00:34:10.172 Number of Namespaces: 0 00:34:10.172 Compare Command: Not Supported 00:34:10.172 Write Uncorrectable Command: Not Supported 00:34:10.173 Dataset Management Command: Not Supported 00:34:10.173 Write Zeroes Command: Not Supported 00:34:10.173 Set Features Save Field: Not Supported 00:34:10.173 Reservations: Not Supported 00:34:10.173 Timestamp: Not Supported 00:34:10.173 Copy: Not Supported 00:34:10.173 Volatile Write Cache: Not Present 00:34:10.173 Atomic Write Unit (Normal): 1 00:34:10.173 Atomic Write Unit (PFail): 1 00:34:10.173 Atomic Compare & Write Unit: 1 00:34:10.174 Fused Compare & Write: Supported 00:34:10.174 Scatter-Gather List 00:34:10.174 SGL Command Set: Supported 00:34:10.174 SGL Keyed: Supported 00:34:10.174 SGL Bit Bucket Descriptor: Not Supported 00:34:10.174 SGL Metadata Pointer: Not Supported 00:34:10.174 Oversized SGL: Not Supported 00:34:10.174 SGL Metadata Address: Not Supported 00:34:10.174 SGL Offset: Supported 00:34:10.174 Transport SGL Data Block: Not Supported 00:34:10.174 Replay Protected Memory Block: Not Supported 00:34:10.175 00:34:10.175 Firmware Slot Information 00:34:10.175 ========================= 00:34:10.175 Active slot: 0 00:34:10.175 00:34:10.175 00:34:10.175 Error Log 00:34:10.175 ========= 00:34:10.175 00:34:10.175 Active Namespaces 00:34:10.175 ================= 00:34:10.175 Discovery Log Page 00:34:10.175 ================== 00:34:10.175 Generation Counter: 2 00:34:10.175 Number of Records: 2 00:34:10.175 Record Format: 0 00:34:10.175 00:34:10.175 Discovery Log Entry 0 00:34:10.175 ---------------------- 00:34:10.175 Transport Type: 3 (TCP) 00:34:10.176 Address Family: 1 (IPv4) 00:34:10.176 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:10.176 Entry Flags: 00:34:10.176 Duplicate Returned Information: 1 00:34:10.176 Explicit Persistent Connection Support for Discovery: 1 00:34:10.176 Transport Requirements: 00:34:10.176 Secure Channel: Not Required 00:34:10.176 Port ID: 0 (0x0000) 00:34:10.176 Controller ID: 65535 (0xffff) 00:34:10.176 Admin Max SQ Size: 128 00:34:10.176 Transport Service Identifier: 4420 00:34:10.177 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:10.177 Transport Address: 10.0.0.2 00:34:10.177
Discovery Log Entry 1 00:34:10.177 ---------------------- 00:34:10.177 Transport Type: 3 (TCP) 00:34:10.178 Address Family: 1 (IPv4) 00:34:10.178 Subsystem Type: 2 (NVM Subsystem) 00:34:10.178 Entry Flags: 00:34:10.178 Duplicate Returned Information: 0 00:34:10.178 Explicit Persistent Connection Support for Discovery: 0 00:34:10.178 Transport Requirements: 00:34:10.178 Secure Channel: Not Required 00:34:10.178 Port ID: 0 (0x0000) 00:34:10.178 Controller ID: 65535 (0xffff) 00:34:10.179 Admin Max SQ Size: 128 00:34:10.179 Transport Service Identifier: 4420 00:34:10.179 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:34:10.179 Transport Address: 10.0.0.2 [2024-12-09 10:43:42.123011] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:34:10.180 [2024-12-09 10:43:42.123050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6100) on tqpair=0x1044690 00:34:10.180 [2024-12-09 10:43:42.123064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.180 [2024-12-09 10:43:42.123073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6280) on tqpair=0x1044690 00:34:10.181 [2024-12-09 10:43:42.123080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.181 [2024-12-09 10:43:42.123088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6400) on tqpair=0x1044690 00:34:10.181 [2024-12-09 10:43:42.123096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.181 [2024-12-09 10:43:42.123104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.182 [2024-12-09 10:43:42.123111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.182 [2024-12-09 10:43:42.123129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.182 [2024-12-09 10:43:42.123138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.183 [2024-12-09 10:43:42.123145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.183 [2024-12-09 10:43:42.123155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.183 [2024-12-09 10:43:42.123181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.183 [2024-12-09 10:43:42.123289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.184 [2024-12-09 10:43:42.123301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.184 [2024-12-09 10:43:42.123308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.184 [2024-12-09 10:43:42.123314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.184 [2024-12-09 10:43:42.123330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.184 [2024-12-09 10:43:42.123338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.185 [2024-12-09 10:43:42.123345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.185 [2024-12-09 
10:43:42.123355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.185 [2024-12-09 10:43:42.123382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.186 [2024-12-09 10:43:42.123486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.186 [2024-12-09 10:43:42.123500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.186 [2024-12-09 10:43:42.123506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.186 [2024-12-09 10:43:42.123513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.187 [2024-12-09 10:43:42.123521] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:34:10.187 [2024-12-09 10:43:42.123528] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:34:10.187 [2024-12-09 10:43:42.123545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.188 [2024-12-09 10:43:42.123553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.188 [2024-12-09 10:43:42.123559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.188 [2024-12-09 10:43:42.123570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.188 [2024-12-09 10:43:42.123591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.189 [2024-12-09 10:43:42.123673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.189 [2024-12-09 10:43:42.123686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.189 [2024-12-09 10:43:42.123692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.189 [2024-12-09 10:43:42.123699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.190 [2024-12-09 10:43:42.123715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.190 [2024-12-09 10:43:42.123747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.190 [2024-12-09 10:43:42.123755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.190 [2024-12-09 10:43:42.123766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.191 [2024-12-09 10:43:42.123788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.191 [2024-12-09 10:43:42.123885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.191 [2024-12-09 10:43:42.123897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.191 [2024-12-09 10:43:42.123904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.192 [2024-12-09 10:43:42.123911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.192 [2024-12-09 10:43:42.123927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.192 [2024-12-09 10:43:42.123936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.192 [2024-12-09 10:43:42.123943] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.193 [2024-12-09 10:43:42.123953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.193 [2024-12-09 10:43:42.123974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.193 [2024-12-09 10:43:42.124070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.193 [2024-12-09 10:43:42.124082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.194 [2024-12-09 10:43:42.124089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.194 [2024-12-09 10:43:42.124099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.194 [2024-12-09 10:43:42.124116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.194 [2024-12-09 10:43:42.124125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.195 [2024-12-09 10:43:42.124131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.195 [2024-12-09 10:43:42.124141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.195 [2024-12-09 10:43:42.124162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.196 [2024-12-09 10:43:42.124238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.196 [2024-12-09 10:43:42.124250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.196 [2024-12-09 10:43:42.124256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.196 [2024-12-09 10:43:42.124263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.197 [2024-12-09 10:43:42.124278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.197 [2024-12-09 10:43:42.124287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.197 [2024-12-09 10:43:42.124293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.197 [2024-12-09 10:43:42.124304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.198 [2024-12-09 10:43:42.124324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.198 [2024-12-09 10:43:42.124400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.198 [2024-12-09 10:43:42.124412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.198 [2024-12-09 10:43:42.124418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.199 [2024-12-09 10:43:42.124425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.199 [2024-12-09 10:43:42.124440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.199 [2024-12-09 10:43:42.124449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.199 [2024-12-09 10:43:42.124456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.200 [2024-12-09 10:43:42.124466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.200 [2024-12-09 10:43:42.124486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.200 [2024-12-09 10:43:42.124563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.200 [2024-12-09 10:43:42.124575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.201 [2024-12-09 10:43:42.124581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.201 [2024-12-09 10:43:42.124588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.201 [2024-12-09 10:43:42.124603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.201 [2024-12-09 10:43:42.124612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.202 [2024-12-09 10:43:42.124618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.202 [2024-12-09 10:43:42.124628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.202 [2024-12-09 10:43:42.124649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.203 [2024-12-09 10:43:42.128730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.203 [2024-12-09 10:43:42.128747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.203 [2024-12-09 10:43:42.128754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.203 [2024-12-09 10:43:42.128760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.204 [2024-12-09 10:43:42.128783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.204 [2024-12-09 10:43:42.128793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.204 [2024-12-09 10:43:42.128800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1044690) 00:34:10.205 [2024-12-09 10:43:42.128810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.205 [2024-12-09 10:43:42.128832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10a6580, cid 3, qid 0 00:34:10.205 [2024-12-09 10:43:42.128954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.205 [2024-12-09 10:43:42.128966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.205 [2024-12-09 10:43:42.128973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.206 [2024-12-09 10:43:42.128979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10a6580) on tqpair=0x1044690 00:34:10.206 [2024-12-09 10:43:42.128992] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:34:10.207 00:34:10.207 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:34:10.208 [2024-12-09 10:43:42.171826] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
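
The preceding identify run tore down its discovery-controller connection cleanly ("shutdown complete in 5 milliseconds"), and the harness now re-runs spdk_nvme_identify directly against the NVM subsystem nqn.2016-06.io.spdk:cnode1. The -r argument is a transport ID in SPDK's key:value format. A rough sketch of the equivalent connection setup through the public SPDK host API (not the tool's actual source; error handling abbreviated, application name made up):

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same format as the -r argument on the command line above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronously performs everything the DEBUG records below trace
     * step by step: socket connect, ICReq/ICResp, FABRIC CONNECT,
     * property reads, controller enable, identify. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}
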
00:34:10.208 [2024-12-09 10:43:42.171876] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184959 ] 00:34:10.209 [2024-12-09 10:43:42.226195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:34:10.209 [2024-12-09 10:43:42.226254] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:34:10.209 [2024-12-09 10:43:42.226265] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:34:10.209 [2024-12-09 10:43:42.226287] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:34:10.210 [2024-12-09 10:43:42.226302] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:34:10.210 [2024-12-09 10:43:42.230033] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:34:10.210 [2024-12-09 10:43:42.230083] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6f1690 0 00:34:10.211 [2024-12-09 10:43:42.230246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:34:10.211 [2024-12-09 10:43:42.230263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:34:10.211 [2024-12-09 10:43:42.230275] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:34:10.211 [2024-12-09 10:43:42.230282] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:34:10.211 [2024-12-09 10:43:42.230317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.212 [2024-12-09 10:43:42.230329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.212 [2024-12-09 10:43:42.230336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.212 [2024-12-09 10:43:42.230350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:10.213 [2024-12-09 10:43:42.230376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.213 [2024-12-09 10:43:42.236735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.213 [2024-12-09 10:43:42.236760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.213 [2024-12-09 10:43:42.236768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.213 [2024-12-09 10:43:42.236776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.214 [2024-12-09 10:43:42.236797] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:34:10.214 [2024-12-09 10:43:42.236810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:34:10.214 [2024-12-09 10:43:42.236820] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:34:10.215 [2024-12-09 10:43:42.236841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.215 [2024-12-09 10:43:42.236850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.215 [2024-12-09 10:43:42.236857] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.216 [2024-12-09 10:43:42.236868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.216 [2024-12-09 10:43:42.236894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.216 [2024-12-09 10:43:42.237070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.216 [2024-12-09 10:43:42.237084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.216 [2024-12-09 10:43:42.237091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.217 [2024-12-09 10:43:42.237097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.217 [2024-12-09 10:43:42.237110] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:34:10.218 [2024-12-09 10:43:42.237125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:34:10.218 [2024-12-09 10:43:42.237138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.218 [2024-12-09 10:43:42.237145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.218 [2024-12-09 10:43:42.237151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.219 [2024-12-09 10:43:42.237162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.219 [2024-12-09 10:43:42.237184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.219 [2024-12-09 10:43:42.237285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.219 [2024-12-09 10:43:42.237299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.220 [2024-12-09 10:43:42.237305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.220 [2024-12-09 10:43:42.237312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.220 [2024-12-09 10:43:42.237321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:34:10.221 [2024-12-09 10:43:42.237335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:34:10.221 [2024-12-09 10:43:42.237347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.223 [2024-12-09 10:43:42.237355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.223 [2024-12-09 10:43:42.237361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.223 [2024-12-09 10:43:42.237371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.223 [2024-12-09 10:43:42.237393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.223 [2024-12-09 10:43:42.237477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.223 [2024-12-09 10:43:42.237490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.223 [2024-12-09 10:43:42.237501] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.224 [2024-12-09 10:43:42.237508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.224 [2024-12-09 10:43:42.237516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:34:10.224 [2024-12-09 10:43:42.237534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.224 [2024-12-09 10:43:42.237543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.224 [2024-12-09 10:43:42.237549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.224 [2024-12-09 10:43:42.237559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.224 [2024-12-09 10:43:42.237581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.225 [2024-12-09 10:43:42.237664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.225 [2024-12-09 10:43:42.237678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.225 [2024-12-09 10:43:42.237684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.225 [2024-12-09 10:43:42.237691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.225 [2024-12-09 10:43:42.237699] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:34:10.228 [2024-12-09 10:43:42.237707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:34:10.229 [2024-12-09 10:43:42.237728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:34:10.230 [2024-12-09 10:43:42.237857] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:34:10.230 [2024-12-09 10:43:42.237870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:34:10.230 [2024-12-09 10:43:42.237883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.231 [2024-12-09 10:43:42.237891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.231 [2024-12-09 10:43:42.237898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.231 [2024-12-09 10:43:42.237908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.231 [2024-12-09 10:43:42.237931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.232 [2024-12-09 10:43:42.238043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.232 [2024-12-09 10:43:42.238057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.232 [2024-12-09 10:43:42.238064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.232 [2024-12-09 10:43:42.238070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.233 [2024-12-09 
10:43:42.238078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:34:10.233 [2024-12-09 10:43:42.238095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.233 [2024-12-09 10:43:42.238104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.234 [2024-12-09 10:43:42.238111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.234 [2024-12-09 10:43:42.238121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.234 [2024-12-09 10:43:42.238143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.234 [2024-12-09 10:43:42.238246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.235 [2024-12-09 10:43:42.238263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.235 [2024-12-09 10:43:42.238271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.235 [2024-12-09 10:43:42.238277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.236 [2024-12-09 10:43:42.238285] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:34:10.236 [2024-12-09 10:43:42.238293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:34:10.237 [2024-12-09 10:43:42.238307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:34:10.237 [2024-12-09 10:43:42.238326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:34:10.237 [2024-12-09 10:43:42.238342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.238 [2024-12-09 10:43:42.238350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.238 [2024-12-09 10:43:42.238360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.239 [2024-12-09 10:43:42.238382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.239 [2024-12-09 10:43:42.238551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.239 [2024-12-09 10:43:42.238565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.239 [2024-12-09 10:43:42.238572] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.240 [2024-12-09 10:43:42.238578] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f1690): datao=0, datal=4096, cccid=0 00:34:10.240 [2024-12-09 10:43:42.238586] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753100) on tqpair(0x6f1690): expected_datao=0, payload_size=4096 00:34:10.240 [2024-12-09 10:43:42.238593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.241 [2024-12-09 10:43:42.238624] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.241 [2024-12-09 10:43:42.238634] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
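
The state transitions traced above ("read vs", "read cap", "check en", "disable and wait for CSTS.RDY = 0", "Setting CC.EN = 1", "wait for CSTS.RDY = 1") are the controller-enable handshake defined by the NVMe base specification, carried here over NVMe-oF Property Get/Set capsules instead of MMIO register access. A toy, self-contained sketch of that sequencing; prop_get()/prop_set() and the instantly-ready fake controller are illustrative stand-ins for the FABRIC PROPERTY GET/SET exchanges in the log:

#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14   /* Controller Configuration */
#define NVME_REG_CSTS 0x1c   /* Controller Status        */

/* Fake register file; a real NVMe-oF host sends Property Get/Set
 * capsules instead. This fake controller mirrors CC.EN into CSTS.RDY
 * immediately, where a real one flips RDY asynchronously (hence the
 * RDY polling below). */
static uint32_t reg32[16];

static uint32_t prop_get(uint32_t ofs) { return reg32[ofs / 4]; }

static void prop_set(uint32_t ofs, uint32_t val)
{
    reg32[ofs / 4] = val;
    if (ofs == NVME_REG_CC) {
        reg32[NVME_REG_CSTS / 4] = val & 0x1;   /* RDY := EN */
    }
}

int main(void)
{
    uint32_t cc = prop_get(NVME_REG_CC);          /* "check en"          */

    if (cc & 0x1) {                               /* EN left set from a  */
        prop_set(NVME_REG_CC, cc & ~0x1u);        /* prior run: disable  */
        while (prop_get(NVME_REG_CSTS) & 0x1) { } /* wait CSTS.RDY = 0   */
    }
    prop_set(NVME_REG_CC, cc | 0x1);              /* "Setting CC.EN = 1" */
    while (!(prop_get(NVME_REG_CSTS) & 0x1)) { }  /* wait CSTS.RDY = 1   */

    printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
    return 0;
}

In the real driver every CSTS poll above is one of the FABRIC PROPERTY GET commands in the log, bounded by the 15000 ms timeouts shown in the state messages.
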
00:34:10.241 [2024-12-09 10:43:42.279848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.241 [2024-12-09 10:43:42.279868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.242 [2024-12-09 10:43:42.279875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.242 [2024-12-09 10:43:42.279882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.242 [2024-12-09 10:43:42.279903] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:34:10.242 [2024-12-09 10:43:42.279913] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:34:10.242 [2024-12-09 10:43:42.279921] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:34:10.243 [2024-12-09 10:43:42.279929] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:34:10.243 [2024-12-09 10:43:42.279937] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:34:10.243 [2024-12-09 10:43:42.279945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:34:10.243 [2024-12-09 10:43:42.279962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:34:10.244 [2024-12-09 10:43:42.279975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.244 [2024-12-09 10:43:42.279983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.244 [2024-12-09 10:43:42.279989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.244 [2024-12-09 10:43:42.280005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:10.244 [2024-12-09 10:43:42.280046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.244 [2024-12-09 10:43:42.280168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.245 [2024-12-09 10:43:42.280181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.245 [2024-12-09 10:43:42.280187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.245 [2024-12-09 10:43:42.280194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.245 [2024-12-09 10:43:42.280206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.245 [2024-12-09 10:43:42.280213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.245 [2024-12-09 10:43:42.280219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6f1690) 00:34:10.245 [2024-12-09 10:43:42.280228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.246 [2024-12-09 10:43:42.280239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.246 [2024-12-09 10:43:42.280245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.246 [2024-12-09 10:43:42.280251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0x6f1690) 00:34:10.246 [2024-12-09 10:43:42.280260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.246 [2024-12-09 10:43:42.280269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.246 [2024-12-09 10:43:42.280275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.247 [2024-12-09 10:43:42.280281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6f1690) 00:34:10.247 [2024-12-09 10:43:42.280289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.247 [2024-12-09 10:43:42.280299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.247 [2024-12-09 10:43:42.280305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.247 [2024-12-09 10:43:42.280311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.247 [2024-12-09 10:43:42.280319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.248 [2024-12-09 10:43:42.280328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:34:10.248 [2024-12-09 10:43:42.280348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:34:10.248 [2024-12-09 10:43:42.280362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.248 [2024-12-09 10:43:42.280369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f1690) 00:34:10.248 [2024-12-09 10:43:42.280379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.249 [2024-12-09 10:43:42.280402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753100, cid 0, qid 0 00:34:10.249 [2024-12-09 10:43:42.280413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753280, cid 1, qid 0 00:34:10.249 [2024-12-09 10:43:42.280421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753400, cid 2, qid 0 00:34:10.249 [2024-12-09 10:43:42.280428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.249 [2024-12-09 10:43:42.280435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753700, cid 4, qid 0 00:34:10.249 [2024-12-09 10:43:42.280604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.250 [2024-12-09 10:43:42.280622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.250 [2024-12-09 10:43:42.280629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.250 [2024-12-09 10:43:42.280636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753700) on tqpair=0x6f1690 00:34:10.250 [2024-12-09 10:43:42.280645] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:34:10.250 [2024-12-09 10:43:42.280654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
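
With the controller enabled, the records above negotiate the keep-alive timer (GET FEATURES KEEP ALIVE TIMER, resolving to one keep-alive every 5000000 us) before moving on to the IOCS-specific identify and queue-count setup that follows. In the SPDK host driver the periodic KEEP ALIVE commands are issued from the admin completion path, so an application only needs to keep polling the admin queue; a minimal sketch under that assumption:

#include "spdk/nvme.h"

/* Poll the admin queue of a connected controller. In SPDK this same
 * call also drives timed work such as the KEEP ALIVE commands visible
 * elsewhere in this log; a negative return indicates a transport
 * failure. */
static void poll_admin(struct spdk_nvme_ctrlr *ctrlr)
{
    while (spdk_nvme_ctrlr_process_admin_completions(ctrlr) >= 0) {
        /* a real application would service its I/O qpairs here too */
    }
}

Shortly below, "Namespace 1 was added" is logged while the active-namespace list is scanned; once spdk_nvme_connect() returns, that list can be walked with the public accessors. A short sketch, assuming the standard SPDK host API:

#include <stdio.h>
#include "spdk/nvme.h"

/* Enumerate the namespaces populated by the identify steps in the log. */
static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns != NULL) {
            printf("nsid %u: %llu bytes\n", nsid,
                   (unsigned long long)spdk_nvme_ns_get_size(ns));
        }
    }
}
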
00:34:10.251 [2024-12-09 10:43:42.280669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:34:10.251 [2024-12-09 10:43:42.280681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:34:10.251 [2024-12-09 10:43:42.280691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.251 [2024-12-09 10:43:42.280699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.251 [2024-12-09 10:43:42.280729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f1690) 00:34:10.251 [2024-12-09 10:43:42.280741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:10.252 [2024-12-09 10:43:42.280765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753700, cid 4, qid 0 00:34:10.252 [2024-12-09 10:43:42.280928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.252 [2024-12-09 10:43:42.280943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.252 [2024-12-09 10:43:42.280950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.252 [2024-12-09 10:43:42.280956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753700) on tqpair=0x6f1690 00:34:10.252 [2024-12-09 10:43:42.281041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:34:10.253 [2024-12-09 10:43:42.281064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:34:10.253 [2024-12-09 10:43:42.281080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.253 [2024-12-09 10:43:42.281088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f1690) 00:34:10.253 [2024-12-09 10:43:42.281098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.253 [2024-12-09 10:43:42.281119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753700, cid 4, qid 0 00:34:10.253 [2024-12-09 10:43:42.281285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.254 [2024-12-09 10:43:42.281299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.254 [2024-12-09 10:43:42.281306] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.254 [2024-12-09 10:43:42.281312] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f1690): datao=0, datal=4096, cccid=4 00:34:10.254 [2024-12-09 10:43:42.281319] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753700) on tqpair(0x6f1690): expected_datao=0, payload_size=4096 00:34:10.254 [2024-12-09 10:43:42.281326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.254 [2024-12-09 10:43:42.281336] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.255 [2024-12-09 10:43:42.281343] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.255 [2024-12-09 10:43:42.281355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.255 [2024-12-09 10:43:42.281364] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.255 [2024-12-09 10:43:42.281371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.255 [2024-12-09 10:43:42.281377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753700) on tqpair=0x6f1690 00:34:10.255 [2024-12-09 10:43:42.281398] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:34:10.255 [2024-12-09 10:43:42.281419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:34:10.256 [2024-12-09 10:43:42.281439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:34:10.256 [2024-12-09 10:43:42.281452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.256 [2024-12-09 10:43:42.281460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f1690) 00:34:10.256 [2024-12-09 10:43:42.281470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.256 [2024-12-09 10:43:42.281492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753700, cid 4, qid 0 00:34:10.256 [2024-12-09 10:43:42.281630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.257 [2024-12-09 10:43:42.281645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.257 [2024-12-09 10:43:42.281651] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.257 [2024-12-09 10:43:42.281657] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f1690): datao=0, datal=4096, cccid=4 00:34:10.257 [2024-12-09 10:43:42.281664] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753700) on tqpair(0x6f1690): expected_datao=0, payload_size=4096 00:34:10.257 [2024-12-09 10:43:42.281671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.257 [2024-12-09 10:43:42.281681] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.257 [2024-12-09 10:43:42.281688] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.258 [2024-12-09 10:43:42.281718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.258 [2024-12-09 10:43:42.281737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.258 [2024-12-09 10:43:42.281744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.258 [2024-12-09 10:43:42.281751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753700) on tqpair=0x6f1690 00:34:10.258 [2024-12-09 10:43:42.281776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:34:10.259 [2024-12-09 10:43:42.281798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:34:10.259 [2024-12-09 10:43:42.281813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.259 [2024-12-09 10:43:42.281821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f1690) 00:34:10.259 [2024-12-09 10:43:42.281831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.259 [2024-12-09 10:43:42.281854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753700, cid 4, qid 0 00:34:10.259 [2024-12-09 10:43:42.282009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.260 [2024-12-09 10:43:42.282021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.260 [2024-12-09 10:43:42.282028] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.260 [2024-12-09 10:43:42.282034] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f1690): datao=0, datal=4096, cccid=4 00:34:10.260 [2024-12-09 10:43:42.282057] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753700) on tqpair(0x6f1690): expected_datao=0, payload_size=4096 00:34:10.260 [2024-12-09 10:43:42.282064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.260 [2024-12-09 10:43:42.282074] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.260 [2024-12-09 10:43:42.282081] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.261 [2024-12-09 10:43:42.282092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.261 [2024-12-09 10:43:42.282106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.261 [2024-12-09 10:43:42.282113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.261 [2024-12-09 10:43:42.282120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753700) on tqpair=0x6f1690 00:34:10.261 [2024-12-09 10:43:42.282133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:34:10.261 [2024-12-09 10:43:42.282149] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:34:10.262 [2024-12-09 10:43:42.282165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:34:10.262 [2024-12-09 10:43:42.282181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:34:10.262 [2024-12-09 10:43:42.282190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:34:10.262 [2024-12-09 10:43:42.282199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:34:10.263 [2024-12-09 10:43:42.282209] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:34:10.263 [2024-12-09 10:43:42.282217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:34:10.263 [2024-12-09 10:43:42.282225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:34:10.263 [2024-12-09 10:43:42.282245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.263 [2024-12-09 10:43:42.282253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f1690) 00:34:10.263 
[2024-12-09 10:43:42.282263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.264 [2024-12-09 10:43:42.282274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.264 [2024-12-09 10:43:42.282281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.264 [2024-12-09 10:43:42.282287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6f1690) 00:34:10.264 [2024-12-09 10:43:42.282295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.264 [2024-12-09 10:43:42.282321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753700, cid 4, qid 0 00:34:10.264 [2024-12-09 10:43:42.282333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753880, cid 5, qid 0 00:34:10.265 [2024-12-09 10:43:42.282492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.265 [2024-12-09 10:43:42.282506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.265 [2024-12-09 10:43:42.282512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.265 [2024-12-09 10:43:42.282519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753700) on tqpair=0x6f1690 00:34:10.265 [2024-12-09 10:43:42.282528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.265 [2024-12-09 10:43:42.282537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.265 [2024-12-09 10:43:42.282544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.266 [2024-12-09 10:43:42.282550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753880) on tqpair=0x6f1690 00:34:10.266 [2024-12-09 10:43:42.282566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.266 [2024-12-09 10:43:42.282574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6f1690) 00:34:10.266 [2024-12-09 10:43:42.282584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.266 [2024-12-09 10:43:42.282610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753880, cid 5, qid 0 00:34:10.266 [2024-12-09 10:43:42.282708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.267 [2024-12-09 10:43:42.282750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.267 [2024-12-09 10:43:42.282759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.267 [2024-12-09 10:43:42.282766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753880) on tqpair=0x6f1690 00:34:10.267 [2024-12-09 10:43:42.282784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.267 [2024-12-09 10:43:42.282793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6f1690) 00:34:10.267 [2024-12-09 10:43:42.282803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.268 [2024-12-09 10:43:42.282825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753880, cid 5, qid 0 00:34:10.268 [2024-12-09 10:43:42.282917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:34:10.268 [2024-12-09 10:43:42.282929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.268 [2024-12-09 10:43:42.282936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.268 [2024-12-09 10:43:42.282943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753880) on tqpair=0x6f1690 00:34:10.268 [2024-12-09 10:43:42.282959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.268 [2024-12-09 10:43:42.282968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6f1690) 00:34:10.269 [2024-12-09 10:43:42.282978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.269 [2024-12-09 10:43:42.282999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753880, cid 5, qid 0 00:34:10.269 [2024-12-09 10:43:42.283101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.269 [2024-12-09 10:43:42.283115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.269 [2024-12-09 10:43:42.283122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.269 [2024-12-09 10:43:42.283128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753880) on tqpair=0x6f1690 00:34:10.269 [2024-12-09 10:43:42.283153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.270 [2024-12-09 10:43:42.283163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6f1690) 00:34:10.270 [2024-12-09 10:43:42.283174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.270 [2024-12-09 10:43:42.283186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.270 [2024-12-09 10:43:42.283193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6f1690) 00:34:10.270 [2024-12-09 10:43:42.283202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.271 [2024-12-09 10:43:42.283214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.271 [2024-12-09 10:43:42.283221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x6f1690) 00:34:10.271 [2024-12-09 10:43:42.283230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.271 [2024-12-09 10:43:42.283242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.271 [2024-12-09 10:43:42.283249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6f1690) 00:34:10.272 [2024-12-09 10:43:42.283258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.272 [2024-12-09 10:43:42.283286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753880, cid 5, qid 0 00:34:10.272 [2024-12-09 10:43:42.283297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753700, cid 4, qid 0 00:34:10.272 [2024-12-09 10:43:42.283305] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753a00, cid 6, qid 0 00:34:10.272 [2024-12-09 10:43:42.283312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753b80, cid 7, qid 0 00:34:10.272 [2024-12-09 10:43:42.283532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.273 [2024-12-09 10:43:42.283544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.273 [2024-12-09 10:43:42.283550] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.273 [2024-12-09 10:43:42.283556] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f1690): datao=0, datal=8192, cccid=5 00:34:10.273 [2024-12-09 10:43:42.283563] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753880) on tqpair(0x6f1690): expected_datao=0, payload_size=8192 00:34:10.273 [2024-12-09 10:43:42.283570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.273 [2024-12-09 10:43:42.283588] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.273 [2024-12-09 10:43:42.283596] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.274 [2024-12-09 10:43:42.283609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.274 [2024-12-09 10:43:42.283619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.274 [2024-12-09 10:43:42.283626] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.274 [2024-12-09 10:43:42.283631] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f1690): datao=0, datal=512, cccid=4 00:34:10.274 [2024-12-09 10:43:42.283638] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753700) on tqpair(0x6f1690): expected_datao=0, payload_size=512 00:34:10.274 [2024-12-09 10:43:42.283645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.274 [2024-12-09 10:43:42.283654] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.275 [2024-12-09 10:43:42.283661] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.275 [2024-12-09 10:43:42.283669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.275 [2024-12-09 10:43:42.283677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.275 [2024-12-09 10:43:42.283684] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.275 [2024-12-09 10:43:42.283689] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6f1690): datao=0, datal=512, cccid=6 00:34:10.275 [2024-12-09 10:43:42.283696] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753a00) on tqpair(0x6f1690): expected_datao=0, payload_size=512 00:34:10.275 [2024-12-09 10:43:42.283718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.276 [2024-12-09 10:43:42.287741] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.276 [2024-12-09 10:43:42.287751] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.276 [2024-12-09 10:43:42.287760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:10.276 [2024-12-09 10:43:42.287769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:10.276 [2024-12-09 10:43:42.287775] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:10.276 [2024-12-09 10:43:42.287782] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x6f1690): datao=0, datal=4096, cccid=7 00:34:10.276 [2024-12-09 10:43:42.287789] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x753b80) on tqpair(0x6f1690): expected_datao=0, payload_size=4096 00:34:10.277 [2024-12-09 10:43:42.287797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.277 [2024-12-09 10:43:42.287806] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:10.277 [2024-12-09 10:43:42.287813] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:10.277 [2024-12-09 10:43:42.287826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.277 [2024-12-09 10:43:42.287836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.277 [2024-12-09 10:43:42.287847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.277 [2024-12-09 10:43:42.287854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753880) on tqpair=0x6f1690 00:34:10.278 [2024-12-09 10:43:42.287874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.278 [2024-12-09 10:43:42.287886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.278 [2024-12-09 10:43:42.287892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.278 [2024-12-09 10:43:42.287899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753700) on tqpair=0x6f1690 00:34:10.278 [2024-12-09 10:43:42.287915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.278 [2024-12-09 10:43:42.287926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.278 [2024-12-09 10:43:42.287933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.279 [2024-12-09 10:43:42.287939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753a00) on tqpair=0x6f1690 00:34:10.279 [2024-12-09 10:43:42.287950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.279 [2024-12-09 10:43:42.287960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.279 [2024-12-09 10:43:42.287966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.279 [2024-12-09 10:43:42.287973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753b80) on tqpair=0x6f1690 00:34:10.279 ===================================================== 00:34:10.279 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:10.280 ===================================================== 00:34:10.280 Controller Capabilities/Features 00:34:10.280 ================================ 00:34:10.280 Vendor ID: 8086 00:34:10.280 Subsystem Vendor ID: 8086 00:34:10.280 Serial Number: SPDK00000000000001 00:34:10.280 Model Number: SPDK bdev Controller 00:34:10.280 Firmware Version: 25.01 00:34:10.280 Recommended Arb Burst: 6 00:34:10.280 IEEE OUI Identifier: e4 d2 5c 00:34:10.280 Multi-path I/O 00:34:10.280 May have multiple subsystem ports: Yes 00:34:10.280 May have multiple controllers: Yes 00:34:10.280 Associated with SR-IOV VF: No 00:34:10.280 Max Data Transfer Size: 131072 00:34:10.280 Max Number of Namespaces: 32 00:34:10.281 Max Number of I/O Queues: 127 00:34:10.281 NVMe Specification Version (VS): 1.3 00:34:10.281 NVMe Specification Version (Identify): 1.3 00:34:10.281 Maximum Queue Entries: 128 00:34:10.281 Contiguous Queues Required: Yes 00:34:10.281 Arbitration Mechanisms Supported 00:34:10.281 Weighted Round Robin: Not Supported 
00:34:10.281 Vendor Specific: Not Supported 00:34:10.281 Reset Timeout: 15000 ms 00:34:10.281 Doorbell Stride: 4 bytes 00:34:10.281 NVM Subsystem Reset: Not Supported 00:34:10.281 Command Sets Supported 00:34:10.281 NVM Command Set: Supported 00:34:10.281 Boot Partition: Not Supported 00:34:10.282 Memory Page Size Minimum: 4096 bytes 00:34:10.282 Memory Page Size Maximum: 4096 bytes 00:34:10.282 Persistent Memory Region: Not Supported 00:34:10.282 Optional Asynchronous Events Supported 00:34:10.282 Namespace Attribute Notices: Supported 00:34:10.282 Firmware Activation Notices: Not Supported 00:34:10.282 ANA Change Notices: Not Supported 00:34:10.282 PLE Aggregate Log Change Notices: Not Supported 00:34:10.282 LBA Status Info Alert Notices: Not Supported 00:34:10.282 EGE Aggregate Log Change Notices: Not Supported 00:34:10.282 Normal NVM Subsystem Shutdown event: Not Supported 00:34:10.282 Zone Descriptor Change Notices: Not Supported 00:34:10.282 Discovery Log Change Notices: Not Supported 00:34:10.282 Controller Attributes 00:34:10.283 128-bit Host Identifier: Supported 00:34:10.283 Non-Operational Permissive Mode: Not Supported 00:34:10.283 NVM Sets: Not Supported 00:34:10.283 Read Recovery Levels: Not Supported 00:34:10.283 Endurance Groups: Not Supported 00:34:10.283 Predictable Latency Mode: Not Supported 00:34:10.283 Traffic Based Keep ALive: Not Supported 00:34:10.283 Namespace Granularity: Not Supported 00:34:10.283 SQ Associations: Not Supported 00:34:10.283 UUID List: Not Supported 00:34:10.283 Multi-Domain Subsystem: Not Supported 00:34:10.283 Fixed Capacity Management: Not Supported 00:34:10.283 Variable Capacity Management: Not Supported 00:34:10.284 Delete Endurance Group: Not Supported 00:34:10.284 Delete NVM Set: Not Supported 00:34:10.284 Extended LBA Formats Supported: Not Supported 00:34:10.284 Flexible Data Placement Supported: Not Supported 00:34:10.284 00:34:10.284 Controller Memory Buffer Support 00:34:10.284 ================================ 00:34:10.284 Supported: No 00:34:10.284 00:34:10.284 Persistent Memory Region Support 00:34:10.284 ================================ 00:34:10.284 Supported: No 00:34:10.284 00:34:10.284 Admin Command Set Attributes 00:34:10.284 ============================ 00:34:10.284 Security Send/Receive: Not Supported 00:34:10.285 Format NVM: Not Supported 00:34:10.285 Firmware Activate/Download: Not Supported 00:34:10.285 Namespace Management: Not Supported 00:34:10.285 Device Self-Test: Not Supported 00:34:10.285 Directives: Not Supported 00:34:10.285 NVMe-MI: Not Supported 00:34:10.285 Virtualization Management: Not Supported 00:34:10.285 Doorbell Buffer Config: Not Supported 00:34:10.285 Get LBA Status Capability: Not Supported 00:34:10.285 Command & Feature Lockdown Capability: Not Supported 00:34:10.285 Abort Command Limit: 4 00:34:10.285 Async Event Request Limit: 4 00:34:10.285 Number of Firmware Slots: N/A 00:34:10.286 Firmware Slot 1 Read-Only: N/A 00:34:10.286 Firmware Activation Without Reset: N/A 00:34:10.286 Multiple Update Detection Support: N/A 00:34:10.286 Firmware Update Granularity: No Information Provided 00:34:10.286 Per-Namespace SMART Log: No 00:34:10.286 Asymmetric Namespace Access Log Page: Not Supported 00:34:10.286 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:34:10.286 Command Effects Log Page: Supported 00:34:10.286 Get Log Page Extended Data: Supported 00:34:10.286 Telemetry Log Pages: Not Supported 00:34:10.286 Persistent Event Log Pages: Not Supported 00:34:10.286 Supported Log Pages Log Page: May Support 
00:34:10.286 Commands Supported & Effects Log Page: Not Supported 00:34:10.287 Feature Identifiers & Effects Log Page:May Support 00:34:10.287 NVMe-MI Commands & Effects Log Page: May Support 00:34:10.287 Data Area 4 for Telemetry Log: Not Supported 00:34:10.287 Error Log Page Entries Supported: 128 00:34:10.287 Keep Alive: Supported 00:34:10.287 Keep Alive Granularity: 10000 ms 00:34:10.287 00:34:10.287 NVM Command Set Attributes 00:34:10.287 ========================== 00:34:10.287 Submission Queue Entry Size 00:34:10.287 Max: 64 00:34:10.287 Min: 64 00:34:10.287 Completion Queue Entry Size 00:34:10.287 Max: 16 00:34:10.287 Min: 16 00:34:10.287 Number of Namespaces: 32 00:34:10.287 Compare Command: Supported 00:34:10.287 Write Uncorrectable Command: Not Supported 00:34:10.287 Dataset Management Command: Supported 00:34:10.288 Write Zeroes Command: Supported 00:34:10.288 Set Features Save Field: Not Supported 00:34:10.288 Reservations: Supported 00:34:10.288 Timestamp: Not Supported 00:34:10.288 Copy: Supported 00:34:10.288 Volatile Write Cache: Present 00:34:10.288 Atomic Write Unit (Normal): 1 00:34:10.288 Atomic Write Unit (PFail): 1 00:34:10.288 Atomic Compare & Write Unit: 1 00:34:10.288 Fused Compare & Write: Supported 00:34:10.288 Scatter-Gather List 00:34:10.288 SGL Command Set: Supported 00:34:10.288 SGL Keyed: Supported 00:34:10.288 SGL Bit Bucket Descriptor: Not Supported 00:34:10.288 SGL Metadata Pointer: Not Supported 00:34:10.288 Oversized SGL: Not Supported 00:34:10.288 SGL Metadata Address: Not Supported 00:34:10.289 SGL Offset: Supported 00:34:10.289 Transport SGL Data Block: Not Supported 00:34:10.289 Replay Protected Memory Block: Not Supported 00:34:10.289 00:34:10.289 Firmware Slot Information 00:34:10.289 ========================= 00:34:10.289 Active slot: 1 00:34:10.289 Slot 1 Firmware Revision: 25.01 00:34:10.289 00:34:10.289 00:34:10.289 Commands Supported and Effects 00:34:10.289 ============================== 00:34:10.289 Admin Commands 00:34:10.289 -------------- 00:34:10.289 Get Log Page (02h): Supported 00:34:10.289 Identify (06h): Supported 00:34:10.289 Abort (08h): Supported 00:34:10.289 Set Features (09h): Supported 00:34:10.289 Get Features (0Ah): Supported 00:34:10.290 Asynchronous Event Request (0Ch): Supported 00:34:10.290 Keep Alive (18h): Supported 00:34:10.290 I/O Commands 00:34:10.290 ------------ 00:34:10.290 Flush (00h): Supported LBA-Change 00:34:10.290 Write (01h): Supported LBA-Change 00:34:10.290 Read (02h): Supported 00:34:10.290 Compare (05h): Supported 00:34:10.290 Write Zeroes (08h): Supported LBA-Change 00:34:10.290 Dataset Management (09h): Supported LBA-Change 00:34:10.290 Copy (19h): Supported LBA-Change 00:34:10.290 00:34:10.290 Error Log 00:34:10.290 ========= 00:34:10.290 00:34:10.290 Arbitration 00:34:10.290 =========== 00:34:10.290 Arbitration Burst: 1 00:34:10.290 00:34:10.291 Power Management 00:34:10.291 ================ 00:34:10.291 Number of Power States: 1 00:34:10.291 Current Power State: Power State #0 00:34:10.291 Power State #0: 00:34:10.291 Max Power: 0.00 W 00:34:10.291 Non-Operational State: Operational 00:34:10.291 Entry Latency: Not Reported 00:34:10.291 Exit Latency: Not Reported 00:34:10.291 Relative Read Throughput: 0 00:34:10.291 Relative Read Latency: 0 00:34:10.291 Relative Write Throughput: 0 00:34:10.291 Relative Write Latency: 0 00:34:10.291 Idle Power: Not Reported 00:34:10.291 Active Power: Not Reported 00:34:10.291 Non-Operational Permissive Mode: Not Supported 00:34:10.291 00:34:10.291 Health 
Information 00:34:10.291 ================== 00:34:10.292 Critical Warnings: 00:34:10.292 Available Spare Space: OK 00:34:10.292 Temperature: OK 00:34:10.292 Device Reliability: OK 00:34:10.292 Read Only: No 00:34:10.292 Volatile Memory Backup: OK 00:34:10.292 Current Temperature: 0 Kelvin (-273 Celsius) 00:34:10.292 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:34:10.292 Available Spare: 0% 00:34:10.292 Available Spare Threshold: 0% 00:34:10.292 Life Percentage Used:[2024-12-09 10:43:42.288103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.292 [2024-12-09 10:43:42.288115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6f1690) 00:34:10.293 [2024-12-09 10:43:42.288125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.293 [2024-12-09 10:43:42.288149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753b80, cid 7, qid 0 00:34:10.293 [2024-12-09 10:43:42.288301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.293 [2024-12-09 10:43:42.288315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.293 [2024-12-09 10:43:42.288322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.293 [2024-12-09 10:43:42.288328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753b80) on tqpair=0x6f1690 00:34:10.294 [2024-12-09 10:43:42.288375] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:34:10.294 [2024-12-09 10:43:42.288396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753100) on tqpair=0x6f1690 00:34:10.294 [2024-12-09 10:43:42.288408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.294 [2024-12-09 10:43:42.288416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753280) on tqpair=0x6f1690 00:34:10.294 [2024-12-09 10:43:42.288424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.294 [2024-12-09 10:43:42.288431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753400) on tqpair=0x6f1690 00:34:10.295 [2024-12-09 10:43:42.288438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.295 [2024-12-09 10:43:42.288446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.295 [2024-12-09 10:43:42.288453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.295 [2024-12-09 10:43:42.288466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.295 [2024-12-09 10:43:42.288474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.295 [2024-12-09 10:43:42.288480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.296 [2024-12-09 10:43:42.288490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.296 [2024-12-09 10:43:42.288520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.296 [2024-12-09 
10:43:42.288669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.296 [2024-12-09 10:43:42.288683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.296 [2024-12-09 10:43:42.288689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.296 [2024-12-09 10:43:42.288696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.296 [2024-12-09 10:43:42.288729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.297 [2024-12-09 10:43:42.288739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.297 [2024-12-09 10:43:42.288745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.297 [2024-12-09 10:43:42.288756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.297 [2024-12-09 10:43:42.288785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.297 [2024-12-09 10:43:42.288902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.297 [2024-12-09 10:43:42.288916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.298 [2024-12-09 10:43:42.288923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.298 [2024-12-09 10:43:42.288929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.298 [2024-12-09 10:43:42.288938] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:34:10.298 [2024-12-09 10:43:42.288946] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:34:10.298 [2024-12-09 10:43:42.288962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.299 [2024-12-09 10:43:42.288972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.299 [2024-12-09 10:43:42.288978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.299 [2024-12-09 10:43:42.288988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.299 [2024-12-09 10:43:42.289025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.299 [2024-12-09 10:43:42.289103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.299 [2024-12-09 10:43:42.289117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.300 [2024-12-09 10:43:42.289123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.300 [2024-12-09 10:43:42.289130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.300 [2024-12-09 10:43:42.289146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.300 [2024-12-09 10:43:42.289155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.300 [2024-12-09 10:43:42.289161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.300 [2024-12-09 10:43:42.289171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.301 [2024-12-09 10:43:42.289193] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.301 [2024-12-09 10:43:42.289294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.301 [2024-12-09 10:43:42.289306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.301 [2024-12-09 10:43:42.289312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.301 [2024-12-09 10:43:42.289319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.301 [2024-12-09 10:43:42.289334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.301 [2024-12-09 10:43:42.289343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.302 [2024-12-09 10:43:42.289349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.302 [2024-12-09 10:43:42.289364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.302 [2024-12-09 10:43:42.289387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.302 [2024-12-09 10:43:42.289469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.302 [2024-12-09 10:43:42.289483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.302 [2024-12-09 10:43:42.289489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.303 [2024-12-09 10:43:42.289496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.303 [2024-12-09 10:43:42.289511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.303 [2024-12-09 10:43:42.289520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.303 [2024-12-09 10:43:42.289526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.303 [2024-12-09 10:43:42.289536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.303 [2024-12-09 10:43:42.289557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.304 [2024-12-09 10:43:42.289644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.304 [2024-12-09 10:43:42.289656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.304 [2024-12-09 10:43:42.289662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.304 [2024-12-09 10:43:42.289669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.304 [2024-12-09 10:43:42.289685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.304 [2024-12-09 10:43:42.289693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.304 [2024-12-09 10:43:42.289716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.304 [2024-12-09 10:43:42.289736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.305 [2024-12-09 10:43:42.289760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.305 [2024-12-09 10:43:42.289848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.305 [2024-12-09 
10:43:42.289863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.305 [2024-12-09 10:43:42.289869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.306 [2024-12-09 10:43:42.289876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.306 [2024-12-09 10:43:42.289893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.306 [2024-12-09 10:43:42.289902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.306 [2024-12-09 10:43:42.289909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.306 [2024-12-09 10:43:42.289919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.307 [2024-12-09 10:43:42.289941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.307 [2024-12-09 10:43:42.290051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.307 [2024-12-09 10:43:42.290065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.307 [2024-12-09 10:43:42.290072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.307 [2024-12-09 10:43:42.290078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.307 [2024-12-09 10:43:42.290095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.307 [2024-12-09 10:43:42.290104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.307 [2024-12-09 10:43:42.290110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.308 [2024-12-09 10:43:42.290120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.308 [2024-12-09 10:43:42.290146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.308 [2024-12-09 10:43:42.290232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.308 [2024-12-09 10:43:42.290246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.308 [2024-12-09 10:43:42.290252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.308 [2024-12-09 10:43:42.290259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.309 [2024-12-09 10:43:42.290274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.309 [2024-12-09 10:43:42.290283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.309 [2024-12-09 10:43:42.290289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.309 [2024-12-09 10:43:42.290299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.309 [2024-12-09 10:43:42.290321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.309 [2024-12-09 10:43:42.290404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.310 [2024-12-09 10:43:42.290417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.310 [2024-12-09 10:43:42.290424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.310 [2024-12-09 
10:43:42.290430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.310 [2024-12-09 10:43:42.290445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.310 [2024-12-09 10:43:42.290454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.310 [2024-12-09 10:43:42.290460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.310 [2024-12-09 10:43:42.290470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.311 [2024-12-09 10:43:42.290491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.311 [2024-12-09 10:43:42.290575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.311 [2024-12-09 10:43:42.290589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.311 [2024-12-09 10:43:42.290595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.311 [2024-12-09 10:43:42.290602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.311 [2024-12-09 10:43:42.290618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.311 [2024-12-09 10:43:42.290627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.311 [2024-12-09 10:43:42.290633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.312 [2024-12-09 10:43:42.290643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.312 [2024-12-09 10:43:42.290664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.312 [2024-12-09 10:43:42.290772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.312 [2024-12-09 10:43:42.290787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.312 [2024-12-09 10:43:42.290794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.312 [2024-12-09 10:43:42.290800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.312 [2024-12-09 10:43:42.290816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.312 [2024-12-09 10:43:42.290825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.312 [2024-12-09 10:43:42.290831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.312 [2024-12-09 10:43:42.290842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.313 [2024-12-09 10:43:42.290864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.347 [2024-12-09 10:43:42.290964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.347 [2024-12-09 10:43:42.290977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.347 [2024-12-09 10:43:42.290984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.347 [2024-12-09 10:43:42.290991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.347 [2024-12-09 10:43:42.291023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
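The run of FABRIC PROPERTY GET records around this point is the host's graceful-shutdown loop: after "Prepare to destruct SSD" above, the host issues a fabrics Property Set to write CC.SHN (the single FABRIC PROPERTY SET record), then repeatedly reads the controller status property until CSTS reports shutdown complete ("shutdown complete in 7 milliseconds" just below). A hedged sketch of triggering the same shutdown from a kernel NVMe-oF host, assuming the controller was attached with nvme-cli as sketched earlier:

```bash
# nvme disconnect performs the same CC.SHN write / CSTS poll sequence that
# spdk_nvme_detach() produces in the trace above.
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# or tear down every NVMe-oF session on the host at once:
sudo nvme disconnect-all
```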
00:34:10.347 [2024-12-09 10:43:42.291032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.347 [2024-12-09 10:43:42.291039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.348 [2024-12-09 10:43:42.291049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.348 [2024-12-09 10:43:42.291070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.348 [2024-12-09 10:43:42.291158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.348 [2024-12-09 10:43:42.291170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.348 [2024-12-09 10:43:42.291177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.348 [2024-12-09 10:43:42.291183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.348 [2024-12-09 10:43:42.291199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.348 [2024-12-09 10:43:42.291208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.348 [2024-12-09 10:43:42.291214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.348 [2024-12-09 10:43:42.291224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.348 [2024-12-09 10:43:42.291245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.348 [2024-12-09 10:43:42.291326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.348 [2024-12-09 10:43:42.291339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.349 [2024-12-09 10:43:42.291346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.291352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.349 [2024-12-09 10:43:42.291367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.291376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.291382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.349 [2024-12-09 10:43:42.291392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.349 [2024-12-09 10:43:42.291413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.349 [2024-12-09 10:43:42.291507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.349 [2024-12-09 10:43:42.291520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.349 [2024-12-09 10:43:42.291527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.291533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.349 [2024-12-09 10:43:42.291548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.291557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.291564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x6f1690) 00:34:10.349 [2024-12-09 10:43:42.291574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.349 [2024-12-09 10:43:42.291595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.349 [2024-12-09 10:43:42.291688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.349 [2024-12-09 10:43:42.291705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.349 [2024-12-09 10:43:42.291713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.291719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.349 [2024-12-09 10:43:42.295753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.295764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:10.349 [2024-12-09 10:43:42.295770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6f1690) 00:34:10.350 [2024-12-09 10:43:42.295781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.350 [2024-12-09 10:43:42.295805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x753580, cid 3, qid 0 00:34:10.350 [2024-12-09 10:43:42.295953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:10.350 [2024-12-09 10:43:42.295967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:10.350 [2024-12-09 10:43:42.295974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:10.350 [2024-12-09 10:43:42.295980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x753580) on tqpair=0x6f1690 00:34:10.350 [2024-12-09 10:43:42.295993] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:34:10.350 0% 00:34:10.350 Data Units Read: 0 00:34:10.350 Data Units Written: 0 00:34:10.350 Host Read Commands: 0 00:34:10.350 Host Write Commands: 0 00:34:10.350 Controller Busy Time: 0 minutes 00:34:10.350 Power Cycles: 0 00:34:10.350 Power On Hours: 0 hours 00:34:10.350 Unsafe Shutdowns: 0 00:34:10.350 Unrecoverable Media Errors: 0 00:34:10.350 Lifetime Error Log Entries: 0 00:34:10.350 Warning Temperature Time: 0 minutes 00:34:10.350 Critical Temperature Time: 0 minutes 00:34:10.350 00:34:10.350 Number of Queues 00:34:10.350 ================ 00:34:10.350 Number of I/O Submission Queues: 127 00:34:10.350 Number of I/O Completion Queues: 127 00:34:10.350 00:34:10.350 Active Namespaces 00:34:10.350 ================= 00:34:10.350 Namespace ID:1 00:34:10.350 Error Recovery Timeout: Unlimited 00:34:10.350 Command Set Identifier: NVM (00h) 00:34:10.350 Deallocate: Supported 00:34:10.350 Deallocated/Unwritten Error: Not Supported 00:34:10.350 Deallocated Read Value: Unknown 00:34:10.350 Deallocate in Write Zeroes: Not Supported 00:34:10.350 Deallocated Guard Field: 0xFFFF 00:34:10.350 Flush: Supported 00:34:10.350 Reservation: Supported 00:34:10.350 Namespace Sharing Capabilities: Multiple Controllers 00:34:10.350 Size (in LBAs): 131072 (0GiB) 00:34:10.350 Capacity (in LBAs): 131072 (0GiB) 00:34:10.351 Utilization (in LBAs): 131072 (0GiB) 00:34:10.351 NGUID: ABCDEF0123456789ABCDEF0123456789 00:34:10.351 EUI64: ABCDEF0123456789 00:34:10.351 UUID: 
c0cf6cb7-c9a9-49e5-b57c-08310c2d0a13 00:34:10.351 Thin Provisioning: Not Supported 00:34:10.351 Per-NS Atomic Units: Yes 00:34:10.351 Atomic Boundary Size (Normal): 0 00:34:10.351 Atomic Boundary Size (PFail): 0 00:34:10.351 Atomic Boundary Offset: 0 00:34:10.351 Maximum Single Source Range Length: 65535 00:34:10.351 Maximum Copy Length: 65535 00:34:10.351 Maximum Source Range Count: 1 00:34:10.351 NGUID/EUI64 Never Reused: No 00:34:10.351 Namespace Write Protected: No 00:34:10.351 Number of LBA Formats: 1 00:34:10.351 Current LBA Format: LBA Format #00 00:34:10.351 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:10.351 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.351 rmmod nvme_tcp 00:34:10.351 rmmod nvme_fabrics 00:34:10.351 rmmod nvme_keyring 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:34:10.351 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2184805 ']' 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2184805 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2184805 ']' 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2184805 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184805 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184805' 00:34:10.352 killing process with pid 2184805 00:34:10.352 10:43:42 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2184805 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2184805 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.352 10:43:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.352 10:43:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.352 00:34:10.352 real 0m6.863s 00:34:10.352 user 0m5.776s 00:34:10.352 sys 0m2.841s 00:34:10.352 10:43:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.352 10:43:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:10.353 ************************************ 00:34:10.353 END TEST nvmf_identify 00:34:10.353 ************************************ 00:34:10.353 10:43:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:34:10.353 10:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:10.353 10:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.353 10:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.353 ************************************ 00:34:10.353 START TEST nvmf_perf 00:34:10.353 ************************************ 00:34:10.353 10:43:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:34:10.353 * Looking for test storage... 
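Before the perf test proceeds, note the nvmftestfini teardown traced just above: the subsystem is deleted over RPC, the target process (pid 2184805) is killed, the initiator kernel modules are unloaded, and the test address is flushed. A hedged manual equivalent, assuming an SPDK checkout for scripts/rpc.py; the TGT_PID variable name is illustrative:

```bash
# Hypothetical manual equivalent of the teardown traced above.
TGT_PID=2184805

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

kill "$TGT_PID"
while kill -0 "$TGT_PID" 2>/dev/null; do sleep 0.5; done   # wait for exit

# Unload the initiator modules, as the log does with modprobe -v -r
# (nvme_fabrics and nvme_keyring are removed as dependencies).
sudo modprobe -v -r nvme-tcp
sudo modprobe -v -r nvme-fabrics

# Flush the test address left on the second port of the NIC pair.
sudo ip -4 addr flush cvl_0_1
```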
00:34:10.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:10.353 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.354 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.355 --rc genhtml_branch_coverage=1 00:34:10.355 --rc genhtml_function_coverage=1 00:34:10.355 --rc genhtml_legend=1 00:34:10.355 --rc geninfo_all_blocks=1 00:34:10.355 --rc geninfo_unexecuted_blocks=1 00:34:10.355 00:34:10.355 ' 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.355 --rc genhtml_branch_coverage=1 00:34:10.355 --rc genhtml_function_coverage=1 00:34:10.355 --rc genhtml_legend=1 00:34:10.355 --rc geninfo_all_blocks=1 00:34:10.355 --rc geninfo_unexecuted_blocks=1 00:34:10.355 00:34:10.355 ' 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.355 --rc genhtml_branch_coverage=1 00:34:10.355 --rc genhtml_function_coverage=1 00:34:10.355 --rc genhtml_legend=1 00:34:10.355 --rc geninfo_all_blocks=1 00:34:10.355 --rc geninfo_unexecuted_blocks=1 00:34:10.355 00:34:10.355 ' 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:10.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.355 --rc genhtml_branch_coverage=1 00:34:10.355 --rc genhtml_function_coverage=1 00:34:10.355 --rc genhtml_legend=1 00:34:10.355 --rc geninfo_all_blocks=1 00:34:10.355 --rc geninfo_unexecuted_blocks=1 00:34:10.355 00:34:10.355 ' 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.355 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.356 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.357 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:10.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.359 10:43:45 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.359 10:43:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:10.359 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.359 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.360 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:10.361 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.361 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:10.362 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.362 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.363 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:10.364 Found net devices under 0000:84:00.0: cvl_0_0 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.364 10:43:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:10.364 Found net devices under 0000:84:00.1: cvl_0_1 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.364 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.365 10:43:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:10.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:34:10.365 00:34:10.365 --- 10.0.0.2 ping statistics --- 00:34:10.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.365 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:10.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:34:10.365 00:34:10.365 --- 10.0.0.1 ping statistics --- 00:34:10.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.365 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.365 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2187037 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2187037 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2187037 ']' 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:10.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.366 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:10.366 [2024-12-09 10:43:48.392395] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:34:10.366 [2024-12-09 10:43:48.392510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.366 [2024-12-09 10:43:48.532578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:10.367 [2024-12-09 10:43:48.644895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.367 [2024-12-09 10:43:48.645002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.367 [2024-12-09 10:43:48.645039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.367 [2024-12-09 10:43:48.645069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.367 [2024-12-09 10:43:48.645096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:10.367 [2024-12-09 10:43:48.648094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.368 [2024-12-09 10:43:48.648191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:10.368 [2024-12-09 10:43:48.648246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:10.368 [2024-12-09 10:43:48.648251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:10.368 10:43:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:10.368 10:43:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:34:10.368 10:43:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:34:10.368 10:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:34:10.368 10:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:10.368 10:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
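[Editor's note] Up to this point the perf test has only built its test bed: the two ports of one E810 NIC (0000:84:00.0/.1) are split so that cvl_0_0 moves into a network namespace to host the target while cvl_0_1 stays in the root namespace as the initiator, and a malloc bdev is created to export. A minimal sketch condensed from the trace above; nvmf_tgt and rpc.py abbreviate the full build/script paths, and the SPDK_NVMF comment string is truncated here for brevity.

    # NVMe/TCP loopback test bed, condensed from the trace above
    ip netns add cvl_0_0_ns_spdk                                  # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'                      # tagged so teardown can strip it
    ping -c 1 10.0.0.2                                            # sanity-check the path both ways
    # start the target inside the namespace and give it block devices to export
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF  # 4 reactors, all tracepoint groups
    rpc.py load_subsystem_config                                  # gen_nvme.sh config: local NVMe at 0000:82:00.0
    rpc.py bdev_malloc_create 64 512                              # 64 MB ramdisk, 512 B blocks -> Malloc0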
00:34:10.368 10:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:34:10.368 10:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:34:10.368 10:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:34:10.368 10:43:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:10.368 [2024-12-09 10:43:53.152380] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.368 10:43:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:10.368 10:43:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:10.369 10:43:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:10.369 10:43:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:10.369 10:43:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:10.369 10:43:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.369 [2024-12-09 10:43:54.894573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.369 10:43:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:10.369 10:43:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:34:10.369 10:43:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:34:10.369 10:43:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:34:10.369 10:43:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:34:10.369 Initializing NVMe Controllers 00:34:10.369 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:34:10.369 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:34:10.369 Initialization complete. Launching workers. 
00:34:10.369 ======================================================== 00:34:10.369 Latency(us) 00:34:10.369 Device Information : IOPS MiB/s Average min max 00:34:10.369 PCIE (0000:82:00.0) NSID 1 from core 0: 84372.21 329.58 378.78 32.51 7236.35 00:34:10.369 ======================================================== 00:34:10.369 Total : 84372.21 329.58 378.78 32.51 7236.35 00:34:10.369 00:34:10.369 10:43:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.369 Initializing NVMe Controllers 00:34:10.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:10.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:10.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:10.369 Initialization complete. Launching workers. 00:34:10.369 ======================================================== 00:34:10.370 Latency(us) 00:34:10.370 Device Information : IOPS MiB/s Average min max 00:34:10.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11278.40 137.71 46533.93 00:34:10.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 75.00 0.29 13834.37 7952.10 47898.62 00:34:10.370 ======================================================== 00:34:10.370 Total : 167.00 0.65 12426.29 137.71 47898.62 00:34:10.370 00:34:10.370 10:43:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.370 Initializing NVMe Controllers 00:34:10.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:10.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:10.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:10.370 Initialization complete. Launching workers. 00:34:10.370 ======================================================== 00:34:10.370 Latency(us) 00:34:10.370 Device Information : IOPS MiB/s Average min max 00:34:10.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8623.48 33.69 3709.28 593.90 10679.50 00:34:10.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3754.33 14.67 8554.63 6463.40 18706.47 00:34:10.370 ======================================================== 00:34:10.370 Total : 12377.81 48.35 5178.92 593.90 18706.47 00:34:10.370 00:34:10.370 10:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:34:10.370 10:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:34:10.370 10:43:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:10.370 Initializing NVMe Controllers 00:34:10.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:10.371 Controller IO queue size 128, less than required. 00:34:10.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:34:10.371 Controller IO queue size 128, less than required. 00:34:10.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:10.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:10.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:10.371 Initialization complete. Launching workers. 00:34:10.371 ======================================================== 00:34:10.371 Latency(us) 00:34:10.371 Device Information : IOPS MiB/s Average min max 00:34:10.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1312.33 328.08 100411.91 53806.82 166646.88 00:34:10.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.27 143.07 230188.16 89352.79 355895.69 00:34:10.371 ======================================================== 00:34:10.371 Total : 1884.60 471.15 139819.32 53806.82 355895.69 00:34:10.371 00:34:10.371 10:44:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:34:10.371 No valid NVMe controllers or AIO or URING devices found 00:34:10.371 Initializing NVMe Controllers 00:34:10.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:10.371 Controller IO queue size 128, less than required. 00:34:10.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:10.371 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:34:10.371 Controller IO queue size 128, less than required. 00:34:10.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:10.371 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:34:10.371 WARNING: Some requested NVMe devices were skipped 00:34:10.372 10:44:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:34:10.372 Initializing NVMe Controllers 00:34:10.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:10.372 Controller IO queue size 128, less than required. 00:34:10.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:10.372 Controller IO queue size 128, less than required. 00:34:10.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:10.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:10.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:10.372 Initialization complete. Launching workers. 
00:34:10.372 00:34:10.372 ==================== 00:34:10.372 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:34:10.372 TCP transport: 00:34:10.372 polls: 7890 00:34:10.372 idle_polls: 5604 00:34:10.372 sock_completions: 2286 00:34:10.372 nvme_completions: 4473 00:34:10.372 submitted_requests: 6680 00:34:10.372 queued_requests: 1 00:34:10.372 00:34:10.372 ==================== 00:34:10.372 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:34:10.372 TCP transport: 00:34:10.372 polls: 8051 00:34:10.372 idle_polls: 5524 00:34:10.372 sock_completions: 2527 00:34:10.372 nvme_completions: 4947 00:34:10.372 submitted_requests: 7402 00:34:10.372 queued_requests: 1 00:34:10.372 ======================================================== 00:34:10.372 Latency(us) 00:34:10.372 Device Information : IOPS MiB/s Average min max 00:34:10.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1117.17 279.29 117269.95 52572.55 207662.52 00:34:10.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1235.59 308.90 104749.87 64232.60 151362.36 00:34:10.372 ======================================================== 00:34:10.372 Total : 2352.76 588.19 110694.85 52572.55 207662.52 00:34:10.372 00:34:10.372 10:44:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:34:10.373 10:44:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.373 rmmod nvme_tcp 00:34:10.373 rmmod nvme_fabrics 00:34:10.373 rmmod nvme_keyring 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2187037 ']' 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2187037 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2187037 ']' 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2187037 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2187037 00:34:10.373 10:44:05 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2187037' 00:34:10.373 killing process with pid 2187037 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2187037 00:34:10.373 10:44:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2187037 00:34:10.373 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.374 10:44:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.374 00:34:10.374 real 0m24.113s 00:34:10.374 user 1m13.513s 00:34:10.374 sys 0m6.938s 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:10.374 ************************************ 00:34:10.374 END TEST nvmf_perf 00:34:10.374 ************************************ 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.374 ************************************ 00:34:10.374 START TEST nvmf_fio_host 00:34:10.374 ************************************ 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:10.374 * Looking for test storage... 
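[Editor's note] Before the fio host test output begins in earnest, here is the data path the nvmf_perf run just completed: the exported bdevs are wired into one subsystem, then spdk_nvme_perf sweeps it at several queue depths and block sizes. Condensed from the trace above; rpc.py and spdk_nvme_perf abbreviate the full paths, and TGT is a convenience variable introduced only for this note.

    # export Malloc0 and Nvme0n1 over NVMe/TCP (as traced above)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # then sweep the fabric: small random I/O at QD 1 and QD 32, large I/O at QD 128,
    # one run with an unaligned 36964 B I/O size (skipped: not a multiple of the 512 B sector),
    # and a final QD 128 run with --transport-stat for the poll/completion counters
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    spdk_nvme_perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TGT"
    spdk_nvme_perf -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TGT"
    spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TGT"
    spdk_nvme_perf -q 128 -o 36964  -O 4096  -w randrw -M 50 -t 5 -r "$TGT" -c 0xf -P 4
    spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$TGT" --transport-stat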
00:34:10.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:10.374 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:34:10.375 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.376 --rc genhtml_branch_coverage=1 00:34:10.376 --rc genhtml_function_coverage=1 00:34:10.376 --rc genhtml_legend=1 00:34:10.376 --rc geninfo_all_blocks=1 00:34:10.376 --rc geninfo_unexecuted_blocks=1 00:34:10.376 00:34:10.376 ' 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.376 --rc genhtml_branch_coverage=1 00:34:10.376 --rc genhtml_function_coverage=1 00:34:10.376 --rc genhtml_legend=1 00:34:10.376 --rc geninfo_all_blocks=1 00:34:10.376 --rc geninfo_unexecuted_blocks=1 00:34:10.376 00:34:10.376 ' 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.376 --rc genhtml_branch_coverage=1 00:34:10.376 --rc genhtml_function_coverage=1 00:34:10.376 --rc genhtml_legend=1 00:34:10.376 --rc geninfo_all_blocks=1 00:34:10.376 --rc geninfo_unexecuted_blocks=1 00:34:10.376 00:34:10.376 ' 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.376 --rc genhtml_branch_coverage=1 00:34:10.376 --rc genhtml_function_coverage=1 00:34:10.376 --rc genhtml_legend=1 00:34:10.376 --rc geninfo_all_blocks=1 00:34:10.376 --rc geninfo_unexecuted_blocks=1 00:34:10.376 00:34:10.376 ' 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.376 10:44:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.376 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.377 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.377 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.377 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.377 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:10.377 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.377 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.377 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.378 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.379 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.379 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.379 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:10.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:10.380 
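The "[: : integer expression expected" complaint above is benign but worth decoding: line 33 of nvmf/common.sh runs '[' '' -eq 1 ']', and -eq requires integer operands, so a flag that expands to the empty string makes [ print the warning and return non-zero; the && branch is simply skipped and the harness continues. A minimal sketch of the failure and two defensive rewrites, assuming a stand-in flag name (MAYBE_FLAG is hypothetical; the xtrace does not show which variable expanded empty):

    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] && echo enabled      # [: : integer expression expected

    # Either give the flag a numeric default...
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo enabled
    # ...or guard against emptiness before the numeric test:
    [[ -n $MAYBE_FLAG && $MAYBE_FLAG -eq 1 ]] && echo enabled

The same warning reappears below when failover.sh sources the same file.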
10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.380 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.381 10:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.381 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:10.382 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:10.382 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.382 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:10.383 Found net devices under 0000:84:00.0: cvl_0_0 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:10.383 Found net devices under 0000:84:00.1: cvl_0_1 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.383 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:10.384 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:10.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:34:10.384 00:34:10.384 --- 10.0.0.2 ping statistics --- 00:34:10.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.385 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:34:10.385 00:34:10.385 --- 10.0.0.1 ping statistics --- 00:34:10.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.385 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.385 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2191484 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2191484 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2191484 ']' 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.386 10:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.386 [2024-12-09 10:44:12.765244] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
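Condensed, the nvmftestinit bring-up traced above amounts to the following sketch (namespace, interface, and binary paths copied from this log; needs root). Moving one port of the NIC into its own network namespace forces target and initiator traffic across the physical link instead of the local loopback path:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator port stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                          # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> root ns
    modprobe nvme-tcp
    ip netns exec "$NS" \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The tagged iptables comment is what lets the teardown later strip exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.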
00:34:10.386 [2024-12-09 10:44:12.765361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.386 [2024-12-09 10:44:12.864881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:10.386 [2024-12-09 10:44:12.940405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.386 [2024-12-09 10:44:12.940477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.386 [2024-12-09 10:44:12.940498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.386 [2024-12-09 10:44:12.940514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.387 [2024-12-09 10:44:12.940528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:10.387 [2024-12-09 10:44:12.942666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.387 [2024-12-09 10:44:12.942797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:10.387 [2024-12-09 10:44:12.942801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.387 [2024-12-09 10:44:12.942748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:10.387 10:44:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.387 10:44:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:34:10.388 10:44:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:10.388 [2024-12-09 10:44:13.545317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.388 10:44:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:10.388 10:44:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.388 10:44:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.388 10:44:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:10.388 Malloc1 00:34:10.388 10:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:10.388 10:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:10.388 10:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.388 [2024-12-09 10:44:15.075876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.388 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:10.388 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:10.388 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:10.388 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:10.388 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:10.388 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:10.389 10:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:10.389 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:10.389 fio-3.35 00:34:10.389 Starting 1 thread 00:34:10.389 00:34:10.389 test: (groupid=0, jobs=1): 
err= 0: pid=2191971: Mon Dec 9 10:44:18 2024 00:34:10.389 read: IOPS=8942, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2006msec) 00:34:10.389 slat (usec): min=2, max=279, avg= 3.23, stdev= 3.31 00:34:10.390 clat (usec): min=2518, max=13845, avg=7814.00, stdev=623.30 00:34:10.390 lat (usec): min=2545, max=13848, avg=7817.23, stdev=623.12 00:34:10.390 clat percentiles (usec): 00:34:10.390 | 1.00th=[ 6521], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:34:10.390 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:34:10.390 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:34:10.390 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11863], 99.95th=[13566], 00:34:10.390 | 99.99th=[13829] 00:34:10.390 bw ( KiB/s): min=35016, max=36184, per=99.95%, avg=35752.00, stdev=540.02, samples=4 00:34:10.390 iops : min= 8754, max= 9046, avg=8938.00, stdev=135.01, samples=4 00:34:10.390 write: IOPS=8962, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2006msec); 0 zone resets 00:34:10.390 slat (usec): min=2, max=139, avg= 3.51, stdev= 2.35 00:34:10.390 clat (usec): min=1706, max=13112, avg=6431.69, stdev=529.54 00:34:10.390 lat (usec): min=1715, max=13115, avg=6435.21, stdev=529.45 00:34:10.390 clat percentiles (usec): 00:34:10.390 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:34:10.390 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:34:10.390 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7177], 00:34:10.390 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[10421], 99.95th=[11731], 00:34:10.390 | 99.99th=[13042] 00:34:10.390 bw ( KiB/s): min=35464, max=36112, per=99.94%, avg=35828.00, stdev=269.04, samples=4 00:34:10.390 iops : min= 8866, max= 9028, avg=8957.00, stdev=67.26, samples=4 00:34:10.390 lat (msec) : 2=0.02%, 4=0.11%, 10=99.70%, 20=0.16% 00:34:10.390 cpu : usr=63.84%, sys=30.42%, ctx=369, majf=0, minf=30 00:34:10.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:10.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:10.390 issued rwts: total=17939,17978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:10.390 00:34:10.390 Run status group 0 (all jobs): 00:34:10.390 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2006-2006msec 00:34:10.390 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2006-2006msec 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.391 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:10.392 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:10.392 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.392 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.392 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.392 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:10.392 10:44:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:10.392 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:34:10.392 fio-3.35 00:34:10.392 Starting 1 thread 00:34:10.392 00:34:10.392 test: (groupid=0, jobs=1): err= 0: pid=2192305: Mon Dec 9 10:44:20 2024 00:34:10.392 read: IOPS=5783, BW=90.4MiB/s (94.8MB/s)(182MiB/2014msec) 00:34:10.392 slat (usec): min=4, max=278, avg= 7.00, stdev= 4.70 00:34:10.392 clat (usec): min=1732, max=27836, avg=13062.17, stdev=4735.84 00:34:10.392 lat (usec): min=1737, max=27846, avg=13069.16, stdev=4737.46 00:34:10.392 clat percentiles (usec): 00:34:10.392 | 1.00th=[ 6128], 5.00th=[ 7242], 10.00th=[ 7963], 20.00th=[ 8979], 00:34:10.392 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11863], 60.00th=[13173], 00:34:10.392 | 70.00th=[14484], 80.00th=[16581], 90.00th=[20317], 95.00th=[23200], 00:34:10.392 | 99.00th=[25297], 99.50th=[25822], 99.90th=[27132], 99.95th=[27395], 00:34:10.392 | 99.99th=[27657] 00:34:10.392 bw ( KiB/s): min=42304, max=60128, per=52.10%, avg=48208.00, stdev=8071.97, samples=4 00:34:10.392 iops : min= 2644, max= 3758, avg=3013.00, stdev=504.50, samples=4 00:34:10.392 write: IOPS=3479, BW=54.4MiB/s (57.0MB/s)(98.3MiB/1809msec); 0 zone resets 00:34:10.392 slat 
(usec): min=43, max=196, avg=60.66, stdev=22.25 00:34:10.392 clat (usec): min=4212, max=32222, avg=15838.76, stdev=4890.72 00:34:10.392 lat (usec): min=4303, max=32314, avg=15899.42, stdev=4905.60 00:34:10.392 clat percentiles (usec): 00:34:10.392 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[11207], 20.00th=[11994], 00:34:10.392 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[15270], 00:34:10.392 | 70.00th=[17171], 80.00th=[20579], 90.00th=[23987], 95.00th=[26084], 00:34:10.392 | 99.00th=[28443], 99.50th=[29492], 99.90th=[30540], 99.95th=[30540], 00:34:10.392 | 99.99th=[32113] 00:34:10.392 bw ( KiB/s): min=43168, max=64160, per=90.45%, avg=50352.00, stdev=9384.70, samples=4 00:34:10.393 iops : min= 2698, max= 4010, avg=3147.00, stdev=586.54, samples=4 00:34:10.393 lat (msec) : 2=0.02%, 4=0.03%, 10=20.24%, 20=65.37%, 50=14.35% 00:34:10.393 cpu : usr=81.77%, sys=16.10%, ctx=74, majf=0, minf=63 00:34:10.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:10.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:10.393 issued rwts: total=11648,6294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:10.393 00:34:10.393 Run status group 0 (all jobs): 00:34:10.393 READ: bw=90.4MiB/s (94.8MB/s), 90.4MiB/s-90.4MiB/s (94.8MB/s-94.8MB/s), io=182MiB (191MB), run=2014-2014msec 00:34:10.393 WRITE: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=98.3MiB (103MB), run=1809-1809msec 00:34:10.393 10:44:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.393 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.394 rmmod nvme_tcp 00:34:10.394 rmmod nvme_fabrics 00:34:10.394 rmmod nvme_keyring 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2191484 ']' 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2191484 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2191484 ']' 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
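Both fio jobs above were launched the same way: no kernel block device is involved; the SPDK NVMe ioengine is injected via LD_PRELOAD and the TCP target is addressed through fio's filename string. Reduced to a sketch (paths and address copied from this log):

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    CONFIG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio   # 4k job; mock_sgl_config.fio drove the 16k run
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio "$CONFIG" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The ldd | grep libasan / libclang_rt.asan probes beforehand only decide whether a sanitizer runtime has to be preloaded ahead of the plugin; both come back empty on this builder, so LD_PRELOAD carries the plugin alone.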
common/autotest_common.sh@958 -- # kill -0 2191484 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2191484 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2191484' 00:34:10.394 killing process with pid 2191484 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2191484 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2191484 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.394 10:44:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.395 00:34:10.395 real 0m14.571s 00:34:10.395 user 0m41.336s 00:34:10.395 sys 0m5.100s 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.395 ************************************ 00:34:10.395 END TEST nvmf_fio_host 00:34:10.395 ************************************ 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.395 ************************************ 00:34:10.395 START TEST nvmf_failover 00:34:10.395 ************************************ 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:10.395 * Looking for test storage... 00:34:10.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:10.395 10:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.395 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:10.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.396 --rc genhtml_branch_coverage=1 00:34:10.396 --rc genhtml_function_coverage=1 00:34:10.396 --rc genhtml_legend=1 00:34:10.396 --rc geninfo_all_blocks=1 00:34:10.396 --rc geninfo_unexecuted_blocks=1 00:34:10.396 00:34:10.396 ' 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:10.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.396 --rc genhtml_branch_coverage=1 00:34:10.396 --rc genhtml_function_coverage=1 00:34:10.396 --rc genhtml_legend=1 00:34:10.396 --rc geninfo_all_blocks=1 00:34:10.396 --rc geninfo_unexecuted_blocks=1 00:34:10.396 00:34:10.396 ' 00:34:10.396 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:10.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.396 --rc genhtml_branch_coverage=1 00:34:10.396 --rc genhtml_function_coverage=1 00:34:10.396 --rc genhtml_legend=1 00:34:10.396 --rc geninfo_all_blocks=1 00:34:10.397 --rc geninfo_unexecuted_blocks=1 00:34:10.397 00:34:10.397 ' 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:10.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.397 --rc genhtml_branch_coverage=1 00:34:10.397 --rc genhtml_function_coverage=1 00:34:10.397 --rc genhtml_legend=1 00:34:10.397 --rc geninfo_all_blocks=1 00:34:10.397 --rc geninfo_unexecuted_blocks=1 00:34:10.397 00:34:10.397 ' 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
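The lcov probe above walks scripts/common.sh's field-by-field version comparison (lt 1.15 2). A simplified sketch of the idea, not the exact cmp_versions implementation:

    lt() {   # succeed when dotted version $1 sorts before $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x: keep the branch/function coverage flags"

Since 1 < 2 in the first field, lt succeeds here, which is why the legacy --rc lcov_branch_coverage=1 style options are exported above.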
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.397 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:10.398 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.398 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.398 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.398 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.398 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.398 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.399 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:10.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
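rpc_py here is the same control socket the fio host test drove above. For reference, that test provisioned its target with the following RPC sequence (commands and arguments copied verbatim from the log; the comments are paraphrase):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as traced above
    $RPC bdev_malloc_create 64 512 -b Malloc1                       # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # exposed as namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

failover.sh reuses the same pattern; the extra ports defined above (NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422) give it alternate listener addresses to move connections between.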
00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:34:10.400 10:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:10.400 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.400 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:34:10.400 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:10.400 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:10.400 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:10.400 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.401 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:34:10.402 Found 0000:84:00.0 (0x8086 - 0x159b)
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:34:10.402 Found 0000:84:00.1 (0x8086 - 0x159b)
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:10.402 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:34:10.403 Found net devices under 0000:84:00.0: cvl_0_0
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:34:10.403 Found net devices under 0000:84:00.1: cvl_0_1
00:34:10.403 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
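What the trace above shows is nvmf/common.sh classifying the host's NICs (the mlx array collects known Mellanox device IDs; the e810 branch wins here since both functions are Intel 0x8086:0x159b ice devices) and then mapping each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that sysfs lookup, with the PCI addresses and cvl_* names taken from this log and everything else illustrative rather than the harness's actual code:

  #!/usr/bin/env bash
  # Sketch of the pci_net_devs pattern traced above (illustrative, not nvmf/common.sh itself).
  set -euo pipefail
  pci_devs=(0000:84:00.0 0000:84:00.1)          # the two e810 functions found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    # every entry under /sys/bus/pci/devices/<pci>/net/ is a netdev bound to that function
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done
  # with two interfaces the harness then assigns one per role, exactly as traced:
  echo "target=${net_devs[0]} initiator=${net_devs[1]}"   # cvl_0_0 / cvl_0_1 on this box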
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:10.404 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:10.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:10.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms
00:34:10.405
00:34:10.405 --- 10.0.0.2 ping statistics ---
00:34:10.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:10.405 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:10.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:10.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms
00:34:10.405
00:34:10.405 --- 10.0.0.1 ping statistics ---
00:34:10.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:10.405 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:10.405 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2194712
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2194712
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2194712 ']'
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:10.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:10.406 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:10.406 [2024-12-09 10:44:27.495469] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:34:10.406 [2024-12-09 10:44:27.495646] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:10.406 [2024-12-09 10:44:27.674553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:10.406 [2024-12-09 10:44:27.795973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:10.406 [2024-12-09 10:44:27.796088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:10.406 [2024-12-09 10:44:27.796125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:10.406 [2024-12-09 10:44:27.796154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:10.406 [2024-12-09 10:44:27.796180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:10.406 [2024-12-09 10:44:27.798922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:10.407 [2024-12-09 10:44:27.799000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:34:10.407 [2024-12-09 10:44:27.799005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:10.407 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:10.407 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:34:10.407 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:10.407 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:10.407 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:10.407 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:10.407 10:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-12-09 10:44:28.355211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:10.407 10:44:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:34:10.407 Malloc0
00:34:10.407 10:44:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:10.407 10:44:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:10.407 10:44:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-12-09 10:44:30.687863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:10.407 10:44:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-12-09 10:44:31.221648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:34:10.407 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-12-09 10:44:31.911649] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
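Stripped of the xtrace noise, the target-side bring-up traced above is a handful of RPCs: create the TCP transport, back a namespace with a RAM bdev, and expose subsystem cnode1 on three portals. A condensed sketch, assuming the repo's scripts/rpc.py and the default /var/tmp/spdk.sock; the loop is shorthand for the three add_listener calls above:

  rpc=scripts/rpc.py                                  # talks to nvmf_tgt via /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport; '-o' and '-u 8192' exactly as the harness passes them
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                      # three portals on the same target IP
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

Three listeners on one IP are what give the failover test three interchangeable paths to the same namespace.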
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2195200
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2195200 /var/tmp/bdevperf.sock
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2195200 ']'
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:10.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:10.408 10:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:10.408 10:44:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:10.408 10:44:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:34:10.408 10:44:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:10.408 NVMe0n1
00:34:10.408 10:44:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:10.408
00:34:10.408 10:44:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2195452
00:34:10.408 10:44:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:10.408 10:44:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:34:10.409 10:44:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:10.409 10:44:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:34:10.409 10:44:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:10.409
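The initiator side mirrors it: bdevperf starts idle (-z) on its own RPC socket, and the same subsystem is attached through two portals under one controller name, so bdev NVMe0n1 comes up with an active path plus a standby; the remove_listener at @43 above then kills the active path while perform_tests is running. A sketch of just the attach calls, with the socket path and flags as in the trace:

  rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # the first attach creates the NVMe0n1 bdev; repeats with the same -b name add paths
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # '-x failover' keeps the extra path passive: I/O only moves to it when the
  # active path dies, which is what the listener removals in this test provoke.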
00:34:10.409 10:44:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:10.409 [2024-12-09 10:44:39.195576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebac00 is same with the state(6) to be set
[... the same *ERROR* line repeats about 48 more times for tqpair=0xebac00, timestamps 10:44:39.195666 through 10:44:39.196280, as the port 4421 qpairs are torn down ...]
00:34:10.411 10:44:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:34:10.412 10:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:10.412 [2024-12-09 10:44:42.735943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:10.412 10:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:34:10.412 10:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:10.412 [2024-12-09 10:44:44.152520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10061e0 is same with the state(6) to be set
[... the same *ERROR* line repeats 15 more times for tqpair=0x10061e0, timestamps 10:44:44.152618 through 10:44:44.152838, as the port 4422 qpairs are torn down ...]
00:34:10.413 10:44:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2195452
00:34:10.413 {
00:34:10.413 "results": [
00:34:10.413 {
00:34:10.413 "job": "NVMe0n1",
00:34:10.413 "core_mask": "0x1",
00:34:10.413 "workload": "verify",
00:34:10.413 "status": "finished",
00:34:10.413 "verify_range": {
00:34:10.413 "start": 0,
00:34:10.413 "length": 16384
00:34:10.413 },
00:34:10.413 "queue_depth": 128,
00:34:10.415 "io_size": 4096,
00:34:10.415 "runtime": 15.009676,
00:34:10.415 "iops": 8586.860902260649,
00:34:10.415 "mibps": 33.54242539945566,
00:34:10.415 "io_failed": 11077,
00:34:10.415 "io_timeout": 0,
00:34:10.415 "avg_latency_us": 13700.487992461502,
00:34:10.415 "min_latency_us": 540.0651851851852,
00:34:10.415 "max_latency_us": 16117.001481481482
00:34:10.415 }
00:34:10.415 ],
00:34:10.415 "core_count": 1
00:34:10.415 }
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2195200
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2195200 ']'
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2195200
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2195200
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2195200'
00:34:10.415 killing process with pid 2195200
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2195200
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2195200
00:34:10.415 10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
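Before the try.txt dump that cat prints next, the results block above is worth a quick arithmetic cross-check: bdevperf's "mibps" should equal iops * io_size / 2^20, and it does; the 11077 failed I/Os are, plausibly, the requests caught in flight across the three deliberate path flips. A one-liner with the numbers copied from the block:

  awk 'BEGIN { iops = 8586.860902260649; io_size = 4096        # "iops" and "io_size" from above
               printf "%.5f MiB/s\n", iops * io_size / (1024 * 1024) }'   # prints 33.54243, matching "mibps"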
00:34:10.415 [2024-12-09 10:44:31.988210] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:34:10.415 [2024-12-09 10:44:31.988316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195200 ]
00:34:10.415 [2024-12-09 10:44:32.067422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:10.415 [2024-12-09 10:44:32.127381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:10.415 Running I/O for 15 seconds...
00:34:10.415 8537.00 IOPS, 33.35 MiB/s [2024-12-09T09:44:55.069Z]
00:34:10.416 [2024-12-09 10:44:34.842235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:10.416 [2024-12-09 10:44:34.842320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every I/O still queued on the deleted submission queue: WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) for lba 85496 through 85800 and READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) for lba 84792 through 85264, len:8 each, every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 10:44:34.842351 through 10:44:34.845444; the excerpt ends mid-dump at the READ command for lba 85264 ...]
00:34:10.432 [2024-12-09 10:44:34.845459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.432 [2024-12-09 10:44:34.845489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.432 [2024-12-09 10:44:34.845519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.432 [2024-12-09 10:44:34.845548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.432 [2024-12-09 10:44:34.845578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.432 [2024-12-09 10:44:34.845607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.432 [2024-12-09 10:44:34.845638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.432 [2024-12-09 10:44:34.845667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.432 [2024-12-09 10:44:34.845682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.432 [2024-12-09 10:44:34.845696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 
10:44:34.845763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.845980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.845995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.846009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.433 [2024-12-09 10:44:34.846025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.433 [2024-12-09 10:44:34.846040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-12-09 10:44:34.846285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.434 [2024-12-09 10:44:34.846300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2486070 is same with the state(6) to be set 00:34:10.434 [2024-12-09 10:44:34.846318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.434 [2024-12-09 10:44:34.846330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.434 [2024-12-09 10:44:34.846343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85480 len:8 PRP1 0x0 PRP2 0x0 00:34:10.435 [2024-12-09 10:44:34.846356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.435 [2024-12-09 10:44:34.846426] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:10.435 [2024-12-09 10:44:34.846468] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.435 [2024-12-09 10:44:34.846487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.435 [2024-12-09 10:44:34.846503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.435 [2024-12-09 10:44:34.846517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.435 [2024-12-09 10:44:34.846531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.435 [2024-12-09 10:44:34.846545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.435 [2024-12-09 10:44:34.846559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.435 [2024-12-09 10:44:34.846573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.435 [2024-12-09 10:44:34.846587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:34:10.435 [2024-12-09 10:44:34.849898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:10.435 [2024-12-09 10:44:34.849938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461820 (9): Bad file descriptor 00:34:10.435 [2024-12-09 10:44:34.887272] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:34:10.435 8479.50 IOPS, 33.12 MiB/s [2024-12-09T09:44:55.089Z] 8630.67 IOPS, 33.71 MiB/s [2024-12-09T09:44:55.089Z] 8648.00 IOPS, 33.78 MiB/s [2024-12-09T09:44:55.089Z] 8675.80 IOPS, 33.89 MiB/s [2024-12-09T09:44:55.089Z] [2024-12-09 10:44:39.197270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-12-09 10:44:39.197319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.435 [2024-12-09 10:44:39.197347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-12-09 10:44:39.197371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.435 [2024-12-09 10:44:39.197389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-12-09 10:44:39.197404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:10.436 [2024-12-09 10:44:39.197614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-12-09 10:44:39.197703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.436 [2024-12-09 10:44:39.197719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.197976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.437 [2024-12-09 10:44:39.197990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.198006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.437 [2024-12-09 10:44:39.198020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.437 [2024-12-09 10:44:39.198035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.437 [2024-12-09 10:44:39.198049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.438 [2024-12-09 10:44:39.198284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.438 [2024-12-09 10:44:39.198300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.439 [2024-12-09 10:44:39.198314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.439 [2024-12-09 10:44:39.198330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.439 [2024-12-09 10:44:39.198344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.439 [2024-12-09 10:44:39.198359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.439 [2024-12-09 10:44:39.198373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.439 [2024-12-09 10:44:39.198388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.439 [2024-12-09 10:44:39.198402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.439 [2024-12-09 10:44:39.198418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.439 [2024-12-09 10:44:39.198431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.439 [2024-12-09 10:44:39.198446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.439 [2024-12-09 10:44:39.198460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.440 [2024-12-09 10:44:39.198637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.440 [2024-12-09 10:44:39.198738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.440 [2024-12-09 10:44:39.198756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.441 [2024-12-09 10:44:39.198785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.441 [2024-12-09 10:44:39.198816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.441 [2024-12-09 10:44:39.198845] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.441 [2024-12-09 10:44:39.198873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.441 [2024-12-09 10:44:39.198907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.441 [2024-12-09 10:44:39.198937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.441 [2024-12-09 10:44:39.198966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.441 [2024-12-09 10:44:39.198980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.198996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.442 [2024-12-09 10:44:39.199379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.442 [2024-12-09 10:44:39.199393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.443 [2024-12-09 10:44:39.199422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:10.443 [2024-12-09 10:44:39.199451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.443 [2024-12-09 10:44:39.199480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.443 [2024-12-09 10:44:39.199509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.443 [2024-12-09 10:44:39.199538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.443 [2024-12-09 10:44:39.199566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.443 [2024-12-09 10:44:39.199595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.443 [2024-12-09 10:44:39.199644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7912 len:8 PRP1 0x0 PRP2 0x0 00:34:10.443 [2024-12-09 10:44:39.199658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.443 [2024-12-09 10:44:39.199694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.443 [2024-12-09 10:44:39.199706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7920 len:8 PRP1 0x0 PRP2 0x0 00:34:10.443 [2024-12-09 10:44:39.199718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.443 [2024-12-09 10:44:39.199740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.443 [2024-12-09 10:44:39.199752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.443 [2024-12-09 10:44:39.199764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7928 len:8 PRP1 0x0 PRP2 0x0 00:34:10.444 [2024-12-09 10:44:39.199777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:10.444 [2024-12-09 10:44:39.199790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.444 [2024-12-09 10:44:39.199801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.444 [2024-12-09 10:44:39.199812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:8 PRP1 0x0 PRP2 0x0 00:34:10.444 [2024-12-09 10:44:39.199824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.444 [2024-12-09 10:44:39.199837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.444 [2024-12-09 10:44:39.199848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.444 [2024-12-09 10:44:39.199859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7944 len:8 PRP1 0x0 PRP2 0x0 00:34:10.444 [2024-12-09 10:44:39.199872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.444 [2024-12-09 10:44:39.199884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.444 [2024-12-09 10:44:39.199895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.444 [2024-12-09 10:44:39.199906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7952 len:8 PRP1 0x0 PRP2 0x0 00:34:10.444 [2024-12-09 10:44:39.199919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.444 [2024-12-09 10:44:39.199933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.444 [2024-12-09 10:44:39.199943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.444 [2024-12-09 10:44:39.199954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7960 len:8 PRP1 0x0 PRP2 0x0 00:34:10.444 [2024-12-09 10:44:39.199967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.444 [2024-12-09 10:44:39.199980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.444 [2024-12-09 10:44:39.199991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.444 [2024-12-09 10:44:39.200002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:8 PRP1 0x0 PRP2 0x0 00:34:10.444 [2024-12-09 10:44:39.200014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.444 [2024-12-09 10:44:39.200028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.448 [2024-12-09 10:44:39.200038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.448 [2024-12-09 10:44:39.200049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7976 len:8 PRP1 0x0 PRP2 0x0 00:34:10.448 [2024-12-09 10:44:39.200065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.448 [2024-12-09 10:44:39.200079] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.448 [2024-12-09 10:44:39.200090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.448 [2024-12-09 10:44:39.200101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7984 len:8 PRP1 0x0 PRP2 0x0 00:34:10.448 [2024-12-09 10:44:39.200114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.448 [2024-12-09 10:44:39.200127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.448 [2024-12-09 10:44:39.200137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.448 [2024-12-09 10:44:39.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7992 len:8 PRP1 0x0 PRP2 0x0 00:34:10.448 [2024-12-09 10:44:39.200169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.448 [2024-12-09 10:44:39.200183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.448 [2024-12-09 10:44:39.200193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.448 [2024-12-09 10:44:39.200204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:8 PRP1 0x0 PRP2 0x0 00:34:10.448 [2024-12-09 10:44:39.200217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.449 [2024-12-09 10:44:39.200230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.449 [2024-12-09 10:44:39.200240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.449 [2024-12-09 10:44:39.200251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8008 len:8 PRP1 0x0 PRP2 0x0 00:34:10.449 [2024-12-09 10:44:39.200264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.449 [2024-12-09 10:44:39.200277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.449 [2024-12-09 10:44:39.200287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.449 [2024-12-09 10:44:39.200298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8016 len:8 PRP1 0x0 PRP2 0x0 00:34:10.449 [2024-12-09 10:44:39.200311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.449 [2024-12-09 10:44:39.200324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.449 [2024-12-09 10:44:39.200334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.449 [2024-12-09 10:44:39.200360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8024 len:8 PRP1 0x0 PRP2 0x0 00:34:10.449 [2024-12-09 10:44:39.200374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.449 [2024-12-09 10:44:39.200388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:34:10.449 [2024-12-09 10:44:39.200399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.449 [2024-12-09 10:44:39.200411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:8 PRP1 0x0 PRP2 0x0 00:34:10.449 [2024-12-09 10:44:39.200425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.449 [2024-12-09 10:44:39.200439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.449 [2024-12-09 10:44:39.200451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.449 [2024-12-09 10:44:39.200472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8040 len:8 PRP1 0x0 PRP2 0x0 00:34:10.449 [2024-12-09 10:44:39.200486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.449 [2024-12-09 10:44:39.200500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.449 [2024-12-09 10:44:39.200512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.449 [2024-12-09 10:44:39.200525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8048 len:8 PRP1 0x0 PRP2 0x0 00:34:10.449 [2024-12-09 10:44:39.200538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.449 [2024-12-09 10:44:39.200552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.449 [2024-12-09 10:44:39.200564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.450 [2024-12-09 10:44:39.200581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8056 len:8 PRP1 0x0 PRP2 0x0 00:34:10.450 [2024-12-09 10:44:39.200595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.450 [2024-12-09 10:44:39.200610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.450 [2024-12-09 10:44:39.200622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.450 [2024-12-09 10:44:39.200635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:8 PRP1 0x0 PRP2 0x0 00:34:10.450 [2024-12-09 10:44:39.200649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.450 [2024-12-09 10:44:39.200664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.450 [2024-12-09 10:44:39.200677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.450 [2024-12-09 10:44:39.200689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8072 len:8 PRP1 0x0 PRP2 0x0 00:34:10.450 [2024-12-09 10:44:39.200703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.450 [2024-12-09 10:44:39.200717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.450 [2024-12-09 10:44:39.200736] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.450 [2024-12-09 10:44:39.200749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8080 len:8 PRP1 0x0 PRP2 0x0 00:34:10.450 [2024-12-09 10:44:39.200773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.454 [2024-12-09 10:44:39.200786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.454 [2024-12-09 10:44:39.200798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.454 [2024-12-09 10:44:39.200816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8088 len:8 PRP1 0x0 PRP2 0x0 00:34:10.454 [2024-12-09 10:44:39.200830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.454 [2024-12-09 10:44:39.200844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.454 [2024-12-09 10:44:39.200855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.454 [2024-12-09 10:44:39.200866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:8 PRP1 0x0 PRP2 0x0 00:34:10.454 [2024-12-09 10:44:39.200879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.454 [2024-12-09 10:44:39.200896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.454 [2024-12-09 10:44:39.200908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.454 [2024-12-09 10:44:39.200920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8104 len:8 PRP1 0x0 PRP2 0x0 00:34:10.454 [2024-12-09 10:44:39.200933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.454 [2024-12-09 10:44:39.200947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.454 [2024-12-09 10:44:39.200957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.454 [2024-12-09 10:44:39.200969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8112 len:8 PRP1 0x0 PRP2 0x0 00:34:10.454 [2024-12-09 10:44:39.200982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.454 [2024-12-09 10:44:39.200996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.454 [2024-12-09 10:44:39.201007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.454 [2024-12-09 10:44:39.201019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8120 len:8 PRP1 0x0 PRP2 0x0 00:34:10.455 [2024-12-09 10:44:39.201037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.455 [2024-12-09 10:44:39.201051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.455 [2024-12-09 10:44:39.201062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:34:10.455 [2024-12-09 10:44:39.201073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:8 PRP1 0x0 PRP2 0x0 00:34:10.455 [2024-12-09 10:44:39.201085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.455 [2024-12-09 10:44:39.201099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.455 [2024-12-09 10:44:39.201111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.455 [2024-12-09 10:44:39.201123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8136 len:8 PRP1 0x0 PRP2 0x0 00:34:10.455 [2024-12-09 10:44:39.201137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.455 [2024-12-09 10:44:39.201150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.455 [2024-12-09 10:44:39.201161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.455 [2024-12-09 10:44:39.201172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8144 len:8 PRP1 0x0 PRP2 0x0 00:34:10.455 [2024-12-09 10:44:39.201185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.455 [2024-12-09 10:44:39.201198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.455 [2024-12-09 10:44:39.201209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.455 [2024-12-09 10:44:39.201226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8152 len:8 PRP1 0x0 PRP2 0x0 00:34:10.455 [2024-12-09 10:44:39.201240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.455 [2024-12-09 10:44:39.201253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.455 [2024-12-09 10:44:39.201264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.455 [2024-12-09 10:44:39.201275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:8 PRP1 0x0 PRP2 0x0 00:34:10.455 [2024-12-09 10:44:39.201288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.455 [2024-12-09 10:44:39.201306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.455 [2024-12-09 10:44:39.201318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.455 [2024-12-09 10:44:39.201329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8168 len:8 PRP1 0x0 PRP2 0x0 00:34:10.455 [2024-12-09 10:44:39.201342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.455 [2024-12-09 10:44:39.201355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.455 [2024-12-09 10:44:39.201366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201378] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8176 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8184 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8200 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8208 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8216 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:8224 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8232 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.456 [2024-12-09 10:44:39.201766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.456 [2024-12-09 10:44:39.201777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.456 [2024-12-09 10:44:39.201789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8240 len:8 PRP1 0x0 PRP2 0x0 00:34:10.456 [2024-12-09 10:44:39.201810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.201823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.201835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.457 [2024-12-09 10:44:39.201847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8248 len:8 PRP1 0x0 PRP2 0x0 00:34:10.457 [2024-12-09 10:44:39.201861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.201876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.201888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.457 [2024-12-09 10:44:39.201900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:8 PRP1 0x0 PRP2 0x0 00:34:10.457 [2024-12-09 10:44:39.201913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.201927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.201938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.457 [2024-12-09 10:44:39.201949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8264 len:8 PRP1 0x0 PRP2 0x0 00:34:10.457 [2024-12-09 10:44:39.201962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.201975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.201986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.457 [2024-12-09 10:44:39.201998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7432 len:8 PRP1 0x0 PRP2 0x0 
00:34:10.457 [2024-12-09 10:44:39.202010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.202024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.202035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.457 [2024-12-09 10:44:39.202063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7440 len:8 PRP1 0x0 PRP2 0x0 00:34:10.457 [2024-12-09 10:44:39.202076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.202090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.202105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.457 [2024-12-09 10:44:39.202116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7448 len:8 PRP1 0x0 PRP2 0x0 00:34:10.457 [2024-12-09 10:44:39.202130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.202143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.202154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.457 [2024-12-09 10:44:39.202165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:8 PRP1 0x0 PRP2 0x0 00:34:10.457 [2024-12-09 10:44:39.202178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.457 [2024-12-09 10:44:39.202191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.457 [2024-12-09 10:44:39.202203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.458 [2024-12-09 10:44:39.202214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7464 len:8 PRP1 0x0 PRP2 0x0 00:34:10.458 [2024-12-09 10:44:39.202226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:39.202239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.458 [2024-12-09 10:44:39.202250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.458 [2024-12-09 10:44:39.202262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7472 len:8 PRP1 0x0 PRP2 0x0 00:34:10.458 [2024-12-09 10:44:39.202275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:39.202288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.458 [2024-12-09 10:44:39.202299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.458 [2024-12-09 10:44:39.202310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7480 len:8 PRP1 0x0 PRP2 0x0 00:34:10.458 [2024-12-09 10:44:39.202323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:39.202394] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:10.458 [2024-12-09 10:44:39.202435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.458 [2024-12-09 10:44:39.202455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:39.202470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.458 [2024-12-09 10:44:39.202485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:39.202499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.458 [2024-12-09 10:44:39.202513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:39.202526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.458 [2024-12-09 10:44:39.202540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:39.202555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:34:10.458 [2024-12-09 10:44:39.202611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461820 (9): Bad file descriptor 00:34:10.458 [2024-12-09 10:44:39.205890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:34:10.458 [2024-12-09 10:44:39.363889] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:34:10.458 8442.33 IOPS, 32.98 MiB/s [2024-12-09T09:44:55.112Z] 8488.43 IOPS, 33.16 MiB/s [2024-12-09T09:44:55.112Z] 8541.62 IOPS, 33.37 MiB/s [2024-12-09T09:44:55.112Z] 8572.11 IOPS, 33.48 MiB/s [2024-12-09T09:44:55.112Z] 8594.50 IOPS, 33.57 MiB/s [2024-12-09T09:44:55.112Z] [2024-12-09 10:44:44.153135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.458 [2024-12-09 10:44:44.153190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:44.153219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.458 [2024-12-09 10:44:44.153236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.458 [2024-12-09 10:44:44.153253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.458 [2024-12-09 10:44:44.153267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3176 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.459 [2024-12-09 10:44:44.153679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.459 [2024-12-09 10:44:44.153694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:10.460 [2024-12-09 10:44:44.153792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.153973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.153987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.154002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.154017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.154031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.154046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.154060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.154075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.154089] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.460 [2024-12-09 10:44:44.154104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.460 [2024-12-09 10:44:44.154118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154403] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.461 [2024-12-09 10:44:44.154554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.461 [2024-12-09 10:44:44.154570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.462 [2024-12-09 10:44:44.154957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.154973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.462 [2024-12-09 10:44:44.154987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.462 [2024-12-09 10:44:44.155004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.462 [2024-12-09 10:44:44.155018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 
10:44:44.155034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155343] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.463 [2024-12-09 10:44:44.155446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.463 [2024-12-09 10:44:44.155466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3824 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 10:44:44.155943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.464 [2024-12-09 10:44:44.155958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.464 [2024-12-09 
00:34:10.465 [2024-12-09 10:44:44.155973 .. 156831] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE sqid:1 lba:3912..4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [27 command/completion pairs condensed]
00:34:10.468 [2024-12-09 10:44:44.156832 .. 157101] nvme_qpair.c: 243/474: *NOTICE*: queued READ sqid:1 lba:3568..3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) [9 command/completion pairs condensed]
00:34:10.468 [2024-12-09 10:44:44.157150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:10.468 [2024-12-09 10:44:44.157168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:10.468 [2024-12-09 10:44:44.157181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3640 len:8 PRP1 0x0 PRP2 0x0
00:34:10.468 [2024-12-09 10:44:44.157196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:10.468 [2024-12-09 10:44:44.157270] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:34:10.469 [2024-12-09 10:44:44.157314 .. 157442] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3, each completed ABORTED - SQ DELETION (00/08) [4 command/completion pairs condensed]
00:34:10.469 [2024-12-09 10:44:44.157456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:34:10.469 [2024-12-09 10:44:44.157496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2461820 (9): Bad file descriptor
00:34:10.469 [2024-12-09 10:44:44.160779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:34:10.469 [2024-12-09 10:44:44.228813] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:34:10.469 8547.00 IOPS, 33.39 MiB/s [2024-12-09T09:44:55.123Z] 8559.08 IOPS, 33.43 MiB/s [2024-12-09T09:44:55.123Z] 8573.69 IOPS, 33.49 MiB/s [2024-12-09T09:44:55.123Z] 8588.64 IOPS, 33.55 MiB/s
00:34:10.469 Latency(us)
00:34:10.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:10.469 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:10.469 Verification LBA range: start 0x0 length 0x4000
00:34:10.469 NVMe0n1 : 15.01 8586.86 33.54 737.99 0.00 13700.49 540.07 16117.00
00:34:10.469 ===================================================================================================================
00:34:10.469 Total : 8586.86 33.54 737.99 0.00 13700.49 540.07 16117.00
00:34:10.469 Received shutdown signal, test time was about 15.000000 seconds
00:34:10.469 Latency(us)
00:34:10.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:10.469 ===================================================================================================================
00:34:10.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
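The count=3 checked above is the pass criterion: the 15-second bdevperf verify run must have survived exactly three forced path failures, one per extra listener. The flood of ABORTED - SQ DELETION completions earlier is the expected signature of each failover: when bdev_nvme abandons a path it deletes the submission queue, every queued I/O is manually completed as aborted, and the I/O is retried on the next trid. A minimal sketch of the pattern that drives it, using the socket, NQN and ports from this run (the remove_listener step that triggers each failover happens earlier in failover.sh and is assumed here; paths are relative to the spdk checkout):

    # start bdevperf idle (-z) on its own RPC socket, then attach one controller
    # with failover enabled; extra listeners on 4421/4422 become alternate paths
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # removing the listener the initiator is connected to forces
    # "Start failover from <old> to <new>" and a controller reset
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420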
10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2197162
10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
10:44:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2197162 /var/tmp/bdevperf.sock
00:34:10.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... [waitforlisten boilerplate (rpc_addr=/var/tmp/bdevperf.sock, max_retries=100, xtrace toggles) condensed]
10:44:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
10:44:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:10.470 [2024-12-09 10:44:50.057799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
10:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:10.470 [2024-12-09 10:44:50.390599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
10:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:10.470 NVMe0n1
10:44:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
10:44:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
10:44:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:44:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
10:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
10:44:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
10:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
10:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2198088
10:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
10:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2198088
00:34:14.046 {
00:34:14.046   "results": [
00:34:14.046     {
00:34:14.046       "job": "NVMe0n1",
00:34:14.046       "core_mask": "0x1",
00:34:14.046       "workload": "verify",
00:34:14.046       "status": "finished",
00:34:14.046       "verify_range": {
00:34:14.046         "start": 0,
00:34:14.046         "length": 16384
00:34:14.046       },
00:34:14.046       "queue_depth": 128,
00:34:14.046       "io_size": 4096,
00:34:14.046       "runtime": 1.05226,
00:34:14.046       "iops": 8338.243399920171,
00:34:14.046       "mibps": 32.57126328093817,
00:34:14.046       "io_failed": 0,
00:34:14.046       "io_timeout": 0,
00:34:14.046       "avg_latency_us": 14719.022977315133,
00:34:14.046       "min_latency_us": 3252.5274074074073,
00:34:14.046       "max_latency_us": 46991.73925925926
00:34:14.046     }
00:34:14.046   ],
00:34:14.046   "core_count": 1
00:34:14.047 }
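When bdevperf is launched with -z it sits idle until the harness asks it to run; the JSON document above is the reply to that request. A sketch of the driving sequence, using the socket path from this run:

    # kick off the queued workload from a companion process, then block on it;
    # the per-job results JSON is printed when the run completes
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    # ... inject listener removals here while I/O is in flight ...
    wait "$run_test_pid"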
10:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:14.047 [2024-12-09 10:44:49.036095] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:34:14.047 [2024-12-09 10:44:49.036285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197162 ]
00:34:14.047 [2024-12-09 10:44:49.151155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:14.047 [2024-12-09 10:44:49.209069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:14.047 [2024-12-09 10:44:53.790942] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:34:14.047 [2024-12-09 10:44:53.791047 .. 791159] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3, each completed ABORTED - SQ DELETION (00/08) [4 command/completion pairs condensed]
00:34:14.047 [2024-12-09 10:44:53.791173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:34:14.047 [2024-12-09 10:44:53.791220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:34:14.047 [2024-12-09 10:44:53.791253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c8820 (9): Bad file descriptor
00:34:14.047 [2024-12-09 10:44:53.802078] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:34:14.047 Running I/O for 1 seconds...
00:34:14.047 8646.00 IOPS, 33.77 MiB/s
00:34:14.047 Latency(us)
00:34:14.047 [2024-12-09T09:44:58.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:14.047 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:14.047 Verification LBA range: start 0x0 length 0x4000
00:34:14.047 NVMe0n1 : 1.05 8338.24 32.57 0.00 0.00 14719.02 3252.53 46991.74
00:34:14.047 [2024-12-09T09:44:58.701Z] ===================================================================================================================
00:34:14.047 [2024-12-09T09:44:58.701Z] Total : 8338.24 32.57 0.00 0.00 14719.02 3252.53 46991.74
10:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
10:44:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
10:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
10:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
10:45:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2197162
[common/autotest_common.sh@954-@964: pid sanity check, kill -0 2197162, uname = Linux, ps --no-headers -o comm= 2197162 -> reactor_0, not sudo — condensed]
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2197162'
killing process with pid 2197162
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2197162
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2197162
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
10:45:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2194712 ']'
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2194712
[common/autotest_common.sh@954-@964: killprocess boilerplate for the target pid (ps -> reactor_1, not sudo) condensed]
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2194712'
killing process with pid 2194712
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2194712
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2194712
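With the bdevperf host and the nvmf target both gone, the environment cleanup that nvmftestfini performs (the module unloads above and the firewall scrub just below) boils down to a short pattern. A condensed sketch of it, with names taken from the trace:

    # retry the unload: connections can keep nvme-tcp busy for a moment after kill;
    # modprobe -r also drops the now-unused nvme_fabrics / nvme_keyring modules
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
    done
    # strip only the rules the test tagged (comment SPDK_NVMF), leaving the
    # rest of the host firewall untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore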
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
10:45:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
[common/autotest_common.sh@22 / nvmf/common.sh@656: xtrace_disable_per_cmd _remove_spdk_ns boilerplate condensed]
10:45:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:22.679
00:34:22.679 real 0m42.980s
00:34:22.679 user 2m33.375s
00:34:22.679 sys 0m8.223s
[common/autotest_common.sh@1130/@10: xtrace toggles condensed]
00:34:22.679 ************************************
00:34:22.679 END TEST nvmf_failover
00:34:22.679 ************************************
10:45:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
[common/autotest_common.sh@1105/@1111: run_test argument checks and xtrace toggles condensed]
00:34:22.679 ************************************
00:34:22.679 START TEST nvmf_host_discovery
00:34:22.679 ************************************
10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:34:22.679 * Looking for test storage...
00:34:22.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[scripts/common.sh@333-@368: cmp_versions splits both versions on IFS=.-:, walks them field by field (ver1[0]=1 < ver2[0]=2) and returns 0 — lcov 1.15 is older than 2; ~20 trace lines condensed]
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724/@1725 -- # export LCOV_OPTS and LCOV='lcov <same switches>' with the lcov_branch_coverage/lcov_function_coverage, genhtml_* and geninfo_* coverage switches [4 repeated multi-line option blocks condensed]
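The version walk condensed above decides which lcov option spellings to export. A compact, self-contained equivalent of what lt/cmp_versions are doing in scripts/common.sh (a sketch, not the library's exact code):

    # lt A B  -> succeeds when dotted version A sorts before B
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    # lcov 1.15 < 2, so the pre-2.0 '--rc lcov_*' switch names get exported
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov options"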
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[nvmf/common.sh@9-@22 defaults condensed: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn]
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
[scripts/common.sh@15-@553 and paths/export.sh@2-@6 condensed: shopt -s extglob, then PATH is rebuilt from /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin plus the system and snap directories; the same multi-kilobyte PATH string is assigned and echoed five times]
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
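The "[: : integer expression expected" complaint a few lines up is a harmless but noisy artifact of testing an unset variable with -eq at nvmf/common.sh line 33. A guarded form of that test avoids it (a sketch; SOME_FLAG is a stand-in, the real variable name is not visible in this trace):

    # '[' "" -eq 1 ']' fails with "integer expression expected";
    # defaulting the expansion keeps test(1) happy when the flag is unset
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        : # flag-specific setup would go here
    fi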
00:34:25.990 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[nvmf/common.sh@315-@344 condensed: declares the pci_devs/pci_net_devs/pci_drivers/net_devs arrays, then fills the supported device-ID tables — e810 (0x1592, 0x159b), x722 (0x37d2) and mlx (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) — ~30 trace lines]
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
Found 0000:84:00.0 (0x8086 - 0x159b)
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
Found 0000:84:00.1 (0x8086 - 0x159b)
[nvmf/common.sh@368-@398: per-device driver checks (ice driver, not unknown/unbound, transport is tcp not rdma) condensed]
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
Found net devices under 0000:84:00.0: cvl_0_0
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
Found net devices under 0000:84:00.1: cvl_0_1
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:25.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:25.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms
00:34:25.991
00:34:25.991 --- 10.0.0.2 ping statistics ---
00:34:25.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:25.991 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:25.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:25.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms
00:34:25.991
00:34:25.991 --- 10.0.0.1 ping statistics ---
00:34:25.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:25.991 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
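The physical-NIC setup above is what lets one machine play both sides of the fabric: one port of the e810 pair (cvl_0_0) is moved into a private network namespace for the target, while its peer (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-verified before any NVMe traffic flows. The shape of it, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side, 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns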
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2201572
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2201572
00:34:25.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [waitforlisten boilerplate (rpc_addr=/var/tmp/spdk.sock, max_retries=100, xtrace toggles) condensed]
00:34:25.991 [2024-12-09 10:45:10.557295] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:34:25.991 [2024-12-09 10:45:10.557467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.260 [2024-12-09 10:45:10.738714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.260 [2024-12-09 10:45:10.853822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.260 [2024-12-09 10:45:10.853886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.260 [2024-12-09 10:45:10.853905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.260 [2024-12-09 10:45:10.853921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.260 [2024-12-09 10:45:10.853934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.260 [2024-12-09 10:45:10.855274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.206 [2024-12-09 10:45:11.728884] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.206 [2024-12-09 10:45:11.741329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.206 null0 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.206 null1 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2201748 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2201748 /tmp/host.sock 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2201748 ']' 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:27.206 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:27.206 10:45:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.468 [2024-12-09 10:45:11.883545] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
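[annotation] Two SPDK apps are being launched here: the namespaced target on the default /var/tmp/spdk.sock (-m 0x2) and a host-side instance on -r /tmp/host.sock (-m 0x1), and waitforlisten blocks until each RPC socket answers. A hedged stand-in for that wait — the probe via rpc_get_methods is this sketch's choice, not necessarily how autotest_common.sh implements it, and it assumes SPDK's scripts/rpc.py is on PATH:

    # Poll an SPDK RPC socket until the app answers.
    wait_for_rpc() {
        local sock=$1 retries=${2:-100}
        while (( retries-- > 0 )); do
            if rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        echo "RPC socket $sock never came up" >&2
        return 1
    }

    # Usage matching the two daemons in this log:
    #   wait_for_rpc /var/tmp/spdk.sock   # namespaced nvmf_tgt
    #   wait_for_rpc /tmp/host.sock       # host-side nvmf_tgt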
00:34:27.468 [2024-12-09 10:45:11.883712] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201748 ] 00:34:27.468 [2024-12-09 10:45:12.056059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.730 [2024-12-09 10:45:12.173458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:27.993 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.256 [2024-12-09 10:45:12.837181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.256 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:28.257 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.257 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.257 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.257 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.257 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.257 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.257 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:28.518 10:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.518 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.519 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.519 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.519 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.519 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:28.519 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:29.091 [2024-12-09 10:45:13.551065] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:29.091 [2024-12-09 10:45:13.551123] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:29.091 [2024-12-09 10:45:13.551180] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:29.091 [2024-12-09 10:45:13.637501] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:29.091 [2024-12-09 10:45:13.696784] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:29.091 [2024-12-09 10:45:13.698912] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x7ad0d0:1 started. 00:34:29.091 [2024-12-09 10:45:13.703133] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:29.091 [2024-12-09 10:45:13.703187] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:29.350 [2024-12-09 10:45:13.749118] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x7ad0d0 was disconnected and freed. delete nvme_qpair. 00:34:29.610 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.610 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:29.610 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:29.610 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.610 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.610 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.610 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.611 10:45:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.611 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
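[annotation] Every assertion in this phase is the same two-part pattern visible in the xtrace: a jq probe over an RPC result, retried by waitforcondition until it matches or ten seconds elapse. Reconstructed from the trace above (autotest_common.sh@918-924 and host/discovery.sh@55/@59/@63; rpc_cmd stands in for the suite's wrapper around "rpc.py -s /tmp/host.sock", and the real helper may differ in minor details):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
            sleep 1
        done
        return 1
    }

    get_subsystem_names() {  # discovery.sh@59: attached controller names
        rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {        # discovery.sh@55: namespaces surfaced as bdevs
        rpc_cmd bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {  # discovery.sh@63: live ports for one controller
        rpc_cmd bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

xargs collapses each result to one space-separated line, which is why the expected values in the trace read like "nvme0n1 nvme0n2" and "4420 4421".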
00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:29.871 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.872 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:30.133 [2024-12-09 10:45:14.574939] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x7ad4a0:1 started. 
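[annotation] The notification_count/notify_id pairs stepping 0 -> 1 -> 2 above are a cursor into the host app's notification log: each new nvme0nX bdev raises one event, and the helper advances the cursor by however many events it consumed. A sketch reconstructed from discovery.sh@74/@75 and the values in this log (the waitforcondition helper is the one sketched earlier):

    notify_id=0
    get_notification_count() {
        notification_count=$(rpc_cmd notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # 0->1 at nvme0n1, 1->2 at nvme0n2
    }
    is_notification_count_eq() {   # discovery.sh@79/@80
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }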
00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:30.133 [2024-12-09 10:45:14.622998] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x7ad4a0 was disconnected and freed. delete nvme_qpair. 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.133 [2024-12-09 10:45:14.719549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:30.133 [2024-12-09 10:45:14.720610] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:30.133 [2024-12-09 10:45:14.720686] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:30.133 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:30.134 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.394 [2024-12-09 10:45:14.809671] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:30.394 10:45:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:30.653 [2024-12-09 10:45:15.075938] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:30.653 [2024-12-09 10:45:15.076060] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:30.653 [2024-12-09 10:45:15.076100] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:30.653 [2024-12-09 10:45:15.076123] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:31.598 10:45:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.598 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.598 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:31.598 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:31.598 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:31.598 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.598 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.599 [2024-12-09 10:45:16.008330] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:31.599 [2024-12-09 10:45:16.008406] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:31.599 [2024-12-09 10:45:16.012481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.599 [2024-12-09 10:45:16.012556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:31.599 [2024-12-09 10:45:16.012599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.599 [2024-12-09 10:45:16.012636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:31.599 [2024-12-09 10:45:16.012671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.599 [2024-12-09 10:45:16.012707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:31.599 [2024-12-09 10:45:16.012804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.599 [2024-12-09 10:45:16.012821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:31.599 [2024-12-09 10:45:16.012837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:31.599 10:45:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:31.599 [2024-12-09 10:45:16.022461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.599 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.599 [2024-12-09 10:45:16.032529] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.599 [2024-12-09 10:45:16.032586] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.599 [2024-12-09 10:45:16.032621] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.599 [2024-12-09 10:45:16.032646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.599 [2024-12-09 10:45:16.032742] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:31.599 [2024-12-09 10:45:16.032912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.599 [2024-12-09 10:45:16.032945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.599 [2024-12-09 10:45:16.032964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.599 [2024-12-09 10:45:16.033017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.599 [2024-12-09 10:45:16.033073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.599 [2024-12-09 10:45:16.033111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.599 [2024-12-09 10:45:16.033148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.599 [2024-12-09 10:45:16.033181] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.599 [2024-12-09 10:45:16.033206] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
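[annotation] The reconnect attempts starting here are the expected fallout of the listener removal at discovery.sh@127. For reference, the complete target-side state being torn down was assembled by the rpc_cmd calls scattered through the log above; collected into rpc.py form (the namespaced target answers on its default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

    # while the host side watches it through the discovery service:
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test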
00:34:31.599 [2024-12-09 10:45:16.033239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.599 [2024-12-09 10:45:16.042778] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.599 [2024-12-09 10:45:16.042809] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.599 [2024-12-09 10:45:16.042832] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.599 [2024-12-09 10:45:16.042842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.599 [2024-12-09 10:45:16.042872] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:31.599 [2024-12-09 10:45:16.042982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.599 [2024-12-09 10:45:16.043014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.599 [2024-12-09 10:45:16.043032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.599 [2024-12-09 10:45:16.043103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.599 [2024-12-09 10:45:16.043157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.599 [2024-12-09 10:45:16.043192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.599 [2024-12-09 10:45:16.043226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.599 [2024-12-09 10:45:16.043257] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.599 [2024-12-09 10:45:16.043281] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.599 [2024-12-09 10:45:16.043301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.599 [2024-12-09 10:45:16.052916] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.599 [2024-12-09 10:45:16.052943] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.599 [2024-12-09 10:45:16.052954] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.599 [2024-12-09 10:45:16.052964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.599 [2024-12-09 10:45:16.052994] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:31.599 [2024-12-09 10:45:16.053279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.599 [2024-12-09 10:45:16.053349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.599 [2024-12-09 10:45:16.053392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.599 [2024-12-09 10:45:16.053449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.599 [2024-12-09 10:45:16.053532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.599 [2024-12-09 10:45:16.053585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.599 [2024-12-09 10:45:16.053621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.599 [2024-12-09 10:45:16.053653] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.599 [2024-12-09 10:45:16.053689] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.599 [2024-12-09 10:45:16.053711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.599 [2024-12-09 10:45:16.063047] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.599 [2024-12-09 10:45:16.063102] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.599 [2024-12-09 10:45:16.063126] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.599 [2024-12-09 10:45:16.063147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.599 [2024-12-09 10:45:16.063210] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:31.599 [2024-12-09 10:45:16.063526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.599 [2024-12-09 10:45:16.063602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.599 [2024-12-09 10:45:16.063645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.599 [2024-12-09 10:45:16.063702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.599 [2024-12-09 10:45:16.063816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.599 [2024-12-09 10:45:16.063862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.600 [2024-12-09 10:45:16.063897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.600 [2024-12-09 10:45:16.063928] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:31.600 [2024-12-09 10:45:16.063951] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.600 [2024-12-09 10:45:16.063970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.600 [2024-12-09 10:45:16.073256] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.600 [2024-12-09 10:45:16.073308] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.600 [2024-12-09 10:45:16.073332] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.600 [2024-12-09 10:45:16.073352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.600 [2024-12-09 10:45:16.073415] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:31.600 [2024-12-09 10:45:16.073654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.600 [2024-12-09 10:45:16.073741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.600 [2024-12-09 10:45:16.073788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.600 [2024-12-09 10:45:16.073845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.600 [2024-12-09 10:45:16.073928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.600 [2024-12-09 10:45:16.073972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.600 [2024-12-09 10:45:16.074007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.600 [2024-12-09 10:45:16.074061] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.600 [2024-12-09 10:45:16.074087] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.600 [2024-12-09 10:45:16.074107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.600 [2024-12-09 10:45:16.083463] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.600 [2024-12-09 10:45:16.083517] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.600 [2024-12-09 10:45:16.083540] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.600 [2024-12-09 10:45:16.083560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.600 [2024-12-09 10:45:16.083622] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:31.600 [2024-12-09 10:45:16.083880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.600 [2024-12-09 10:45:16.083950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.600 [2024-12-09 10:45:16.083991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.600 [2024-12-09 10:45:16.084048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.600 [2024-12-09 10:45:16.084136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.600 [2024-12-09 10:45:16.084181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.600 [2024-12-09 10:45:16.084216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.600 [2024-12-09 10:45:16.084247] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.600 [2024-12-09 10:45:16.084270] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.600 [2024-12-09 10:45:16.084289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.600 [2024-12-09 10:45:16.093672] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.600 [2024-12-09 10:45:16.093745] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.600 [2024-12-09 10:45:16.093775] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.600 [2024-12-09 10:45:16.093796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.600 [2024-12-09 10:45:16.093863] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:31.600 [2024-12-09 10:45:16.094085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.600 [2024-12-09 10:45:16.094154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.600 [2024-12-09 10:45:16.094196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.600 [2024-12-09 10:45:16.094253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.600 [2024-12-09 10:45:16.094368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.600 [2024-12-09 10:45:16.094431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.600 [2024-12-09 10:45:16.094468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.600 [2024-12-09 10:45:16.094501] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.600 [2024-12-09 10:45:16.094524] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.600 [2024-12-09 10:45:16.094544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:31.600 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:31.600 [2024-12-09 10:45:16.103912] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.600 [2024-12-09 10:45:16.103968] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
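The xtrace interleaved above (autotest_common.sh @918–@922) is the suite's waitforcondition helper: it evals an arbitrary condition string up to max=10 times, and the condition here shells out to get_bdev_list, which queries bdev_get_bdevs over the host app's /tmp/host.sock RPC socket and normalizes the names with jq/sort/xargs. A minimal reconstruction of that pattern follows; the per-attempt pause and the failure return value are assumptions, since the trace only shows the success path, and rpc.py stands in for the suite's rpc_cmd wrapper.

    # Sketch of the waitforcondition pattern traced above.
    # Assumptions (not visible in the trace): a 1s pause between
    # attempts and `return 1` on exhaustion.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition may call helpers like get_bdev_list
            sleep 1
        done
        return 1
    }

    get_bdev_list() {
        # Same RPC and jq pipeline shown in the trace.
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # e.g. wait until both namespaces reappear after the path change:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'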
00:34:31.600 [2024-12-09 10:45:16.103992] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.600 [2024-12-09 10:45:16.104012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.600 [2024-12-09 10:45:16.104081] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:31.600 [2024-12-09 10:45:16.104363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.600 [2024-12-09 10:45:16.104432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.600 [2024-12-09 10:45:16.104474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.600 [2024-12-09 10:45:16.104530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.600 [2024-12-09 10:45:16.106612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.600 [2024-12-09 10:45:16.106671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.600 [2024-12-09 10:45:16.106708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.600 [2024-12-09 10:45:16.106760] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.600 [2024-12-09 10:45:16.106785] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.600 [2024-12-09 10:45:16.106818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.600 [2024-12-09 10:45:16.114131] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.600 [2024-12-09 10:45:16.114186] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.600 [2024-12-09 10:45:16.114210] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.601 [2024-12-09 10:45:16.114230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.601 [2024-12-09 10:45:16.114294] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:31.601 [2024-12-09 10:45:16.114591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.601 [2024-12-09 10:45:16.114659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.601 [2024-12-09 10:45:16.114701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.601 [2024-12-09 10:45:16.114785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.601 [2024-12-09 10:45:16.114823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.601 [2024-12-09 10:45:16.114842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.601 [2024-12-09 10:45:16.114858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.601 [2024-12-09 10:45:16.114872] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.601 [2024-12-09 10:45:16.114883] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.601 [2024-12-09 10:45:16.114891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.601 [2024-12-09 10:45:16.124344] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.601 [2024-12-09 10:45:16.124398] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.601 [2024-12-09 10:45:16.124422] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.601 [2024-12-09 10:45:16.124442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.601 [2024-12-09 10:45:16.124505] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:31.601 [2024-12-09 10:45:16.124774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.601 [2024-12-09 10:45:16.124804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.601 [2024-12-09 10:45:16.124822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.601 [2024-12-09 10:45:16.124847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.601 [2024-12-09 10:45:16.124896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.601 [2024-12-09 10:45:16.124918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.601 [2024-12-09 10:45:16.124934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.601 [2024-12-09 10:45:16.124948] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:31.601 [2024-12-09 10:45:16.124958] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.601 [2024-12-09 10:45:16.124973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:31.601 [2024-12-09 10:45:16.134554] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:31.601 [2024-12-09 10:45:16.134608] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:31.601 [2024-12-09 10:45:16.134632] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:31.601 [2024-12-09 10:45:16.134652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:31.601 [2024-12-09 10:45:16.134714] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:31.601 [2024-12-09 10:45:16.134882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.601 [2024-12-09 10:45:16.134912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d710 with addr=10.0.0.2, port=4420 00:34:31.601 [2024-12-09 10:45:16.134930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d710 is same with the state(6) to be set 00:34:31.601 [2024-12-09 10:45:16.134955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d710 (9): Bad file descriptor 00:34:31.601 [2024-12-09 10:45:16.135007] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:31.601 [2024-12-09 10:45:16.135036] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:31.601 [2024-12-09 10:45:16.135073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:31.601 [2024-12-09 10:45:16.135095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:31.601 [2024-12-09 10:45:16.135111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:31.601 [2024-12-09 10:45:16.135125] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:31.601 [2024-12-09 10:45:16.135135] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:31.601 [2024-12-09 10:45:16.135144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
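Each repetition of the block above is one pass of bdev_nvme's reset/reconnect state machine while nothing is listening on 10.0.0.2:4420: delete the qpairs, disconnect the controller, attempt a fresh TCP connection (connect() fails with errno 111, ECONNREFUSED), mark controller reinitialization failed, and schedule the next retry. The loop only resolves once the discovery service reports the subsystem on port 4421 instead. A hedged sketch of how a test can observe that path switch, using the same bdev_nvme_get_controllers RPC and jq filter seen in this trace (rpc.py again standing in for rpc_cmd):

    # Poll the controller's active transport service ID until it moves to 4421.
    for attempt in $(seq 1 10); do
        ports=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
                | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        echo "attempt ${attempt}: nvme0 path(s): ${ports:-none}"
        [[ "$ports" == "4421" ]] && break
        sleep 1
    done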
00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:31.601 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.863 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:31.863 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:31.864 10:45:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.864 
10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.127 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:32.127 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:32.127 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:32.127 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.127 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.127 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.127 10:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.074 [2024-12-09 10:45:17.571290] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:33.074 [2024-12-09 10:45:17.571346] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:33.074 [2024-12-09 10:45:17.571399] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:33.074 [2024-12-09 10:45:17.700914] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:33.337 [2024-12-09 10:45:17.802338] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:33.337 [2024-12-09 10:45:17.803840] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x779530:1 started. 00:34:33.337 [2024-12-09 10:45:17.808455] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:33.337 [2024-12-09 10:45:17.808534] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:33.337 [2024-12-09 10:45:17.810999] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x779530 was disconnected and freed. delete nvme_qpair. 
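At this point discovery has re-attached nvme0 via 10.0.0.2:4421, and the script moves on to the negative cases: re-issuing bdev_nvme_start_discovery under a name already in use must fail, which the JSON-RPC dumps below confirm with code -17 ("File exists") for both nvme and nvme_second against port 8009, and with code -110 ("Connection timed out") for the 8010 attempt that carries a 3000 ms attach timeout. A minimal sketch of that expect-failure check, under the same rpc.py/socket assumptions as above:

    # Re-running discovery under an existing name must be rejected with -17.
    if rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
           -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "ERROR: duplicate discovery service was accepted" >&2
        exit 1
    fi
    echo "duplicate bdev_nvme_start_discovery rejected as expected (File exists)"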
00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.337 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.338 request: 00:34:33.338 { 00:34:33.338 "name": "nvme", 00:34:33.338 "trtype": "tcp", 00:34:33.338 "traddr": "10.0.0.2", 00:34:33.338 "adrfam": "ipv4", 00:34:33.338 "trsvcid": "8009", 00:34:33.338 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:33.338 "wait_for_attach": true, 00:34:33.338 "method": "bdev_nvme_start_discovery", 00:34:33.338 "req_id": 1 00:34:33.338 } 00:34:33.338 Got JSON-RPC error response 00:34:33.338 response: 00:34:33.338 { 00:34:33.338 "code": -17, 00:34:33.338 "message": "File exists" 00:34:33.338 } 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.338 request: 00:34:33.338 { 00:34:33.338 "name": "nvme_second", 00:34:33.338 "trtype": "tcp", 00:34:33.338 "traddr": "10.0.0.2", 00:34:33.338 "adrfam": "ipv4", 00:34:33.338 "trsvcid": "8009", 00:34:33.338 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:33.338 "wait_for_attach": true, 00:34:33.338 "method": "bdev_nvme_start_discovery", 00:34:33.338 "req_id": 1 00:34:33.338 } 00:34:33.338 Got JSON-RPC error response 00:34:33.338 response: 00:34:33.338 { 00:34:33.338 "code": -17, 00:34:33.338 "message": "File exists" 00:34:33.338 } 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:33.338 10:45:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.600 10:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:34.546 [2024-12-09 10:45:19.104516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.546 [2024-12-09 10:45:19.104596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7918e0 with addr=10.0.0.2, port=8010 00:34:34.546 [2024-12-09 10:45:19.104634] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:34.546 [2024-12-09 10:45:19.104654] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:34.546 [2024-12-09 10:45:19.104670] 
bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:35.492 [2024-12-09 10:45:20.107026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.492 [2024-12-09 10:45:20.107158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7918e0 with addr=10.0.0.2, port=8010 00:34:35.492 [2024-12-09 10:45:20.107228] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:35.492 [2024-12-09 10:45:20.107265] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:35.492 [2024-12-09 10:45:20.107317] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:36.882 [2024-12-09 10:45:21.109035] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:36.882 request: 00:34:36.882 { 00:34:36.882 "name": "nvme_second", 00:34:36.882 "trtype": "tcp", 00:34:36.882 "traddr": "10.0.0.2", 00:34:36.882 "adrfam": "ipv4", 00:34:36.882 "trsvcid": "8010", 00:34:36.882 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:36.882 "wait_for_attach": false, 00:34:36.882 "attach_timeout_ms": 3000, 00:34:36.882 "method": "bdev_nvme_start_discovery", 00:34:36.882 "req_id": 1 00:34:36.882 } 00:34:36.882 Got JSON-RPC error response 00:34:36.882 response: 00:34:36.882 { 00:34:36.882 "code": -110, 00:34:36.882 "message": "Connection timed out" 00:34:36.882 } 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2201748 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:36.882 rmmod nvme_tcp 00:34:36.882 rmmod nvme_fabrics 00:34:36.882 rmmod nvme_keyring 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2201572 ']' 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2201572 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2201572 ']' 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2201572 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201572 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201572' 00:34:36.882 killing process with pid 2201572 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2201572 00:34:36.882 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2201572 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.144 10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.144 
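The nvmftestfini teardown traced here runs in a fixed order: sync, unload the kernel NVMe-oF modules (the bare rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are kernel output interleaved into the trace), kill the target process (pid 2201572, running as reactor_1), strip the SPDK_NVMF iptables rules, and flush the test interface address. A condensed sketch of that ordering, with the pid and interface name taken from this run and root privileges assumed:

    # Condensed sketch of the nvmftestfini ordering seen above.
    sync
    modprobe -v -r nvme-tcp        # rmmod lines above are the kernel's replies
    modprobe -v -r nvme-fabrics
    kill -0 2201572 && kill 2201572 && wait 2201572   # target pid from this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test firewall rules
    ip -4 addr flush cvl_0_1       # clear the secondary test interface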
10:45:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:39.064 00:34:39.064 real 0m16.823s 00:34:39.064 user 0m24.031s 00:34:39.064 sys 0m4.536s 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:39.064 ************************************ 00:34:39.064 END TEST nvmf_host_discovery 00:34:39.064 ************************************ 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.064 ************************************ 00:34:39.064 START TEST nvmf_host_multipath_status 00:34:39.064 ************************************ 00:34:39.064 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:39.327 * Looking for test storage... 00:34:39.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:39.327 10:45:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:39.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.327 --rc genhtml_branch_coverage=1 00:34:39.327 --rc genhtml_function_coverage=1 00:34:39.327 --rc genhtml_legend=1 00:34:39.327 --rc geninfo_all_blocks=1 00:34:39.327 --rc geninfo_unexecuted_blocks=1 00:34:39.327 00:34:39.327 ' 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:39.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.327 --rc genhtml_branch_coverage=1 00:34:39.327 --rc genhtml_function_coverage=1 00:34:39.327 --rc genhtml_legend=1 00:34:39.327 --rc geninfo_all_blocks=1 00:34:39.327 --rc geninfo_unexecuted_blocks=1 00:34:39.327 00:34:39.327 ' 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:39.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.327 --rc genhtml_branch_coverage=1 00:34:39.327 --rc genhtml_function_coverage=1 00:34:39.327 --rc genhtml_legend=1 00:34:39.327 --rc geninfo_all_blocks=1 00:34:39.327 --rc geninfo_unexecuted_blocks=1 00:34:39.327 00:34:39.327 ' 00:34:39.327 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:39.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.328 --rc genhtml_branch_coverage=1 00:34:39.328 --rc genhtml_function_coverage=1 00:34:39.328 --rc 
genhtml_legend=1 00:34:39.328 --rc geninfo_all_blocks=1 00:34:39.328 --rc geninfo_unexecuted_blocks=1 00:34:39.328 00:34:39.328 ' 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
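The PATH lines above balloon because paths/export.sh prepends the go/protoc/golangci directories every time it is sourced, so each pass stacks another copy in front; lookups still resolve to the first hit, so the duplication is cosmetic. An order-preserving dedup, if one ever wanted it:

PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
PATH=${PATH%:}                  # drop the trailing ':' that ORS leaves behind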
-eq 1 ']' 00:34:39.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:39.328 10:45:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:42.639 10:45:26 
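The lone stderr line above ("integer expression expected") is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the guarded variable is unset, test(1) refuses to compare an empty string numerically, and the script falls through to the next branch exactly as if the check had returned false. The usual defensive spelling, with hypothetical names — only the ':-0' default is the point:

[ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo "feature on"   # empty/unset safely reads as 0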
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.639 
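gather_supported_nvmf_pci_devs is building lookup arrays of the NIC device IDs it knows — two Intel E810 variants (0x1592, 0x159b), one X722 (0x37d2), and a list of Mellanox ConnectX parts — then keeps only the e810 set, which is why pci_devs is reassigned from "${e810[@]}" after the [[ e810 == e810 ]] match. Roughly the same probe by hand:

lspci -d 8086:159b    # the two E810 ports the next lines report at 0000:84:00.0/.1
lspci -d 8086:1592    # the other supported E810 device ID (none present on this node)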
10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:42.639 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:42.639 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:42.639 Found net devices under 0000:84:00.0: cvl_0_0 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:42.639 Found net devices under 0000:84:00.1: cvl_0_1 00:34:42.639 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:42.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:42.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:34:42.640 00:34:42.640 --- 10.0.0.2 ping statistics --- 00:34:42.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.640 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:42.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:42.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:34:42.640 00:34:42.640 --- 10.0.0.1 ping statistics --- 00:34:42.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.640 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:42.640 10:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2205051 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
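nvmf_tcp_init has now split the two E810 ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule opens port 4420 toward the initiator interface, and both directions are ping-verified. Condensed straight from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target, inside ns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1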
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2205051 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2205051 ']' 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.640 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.640 [2024-12-09 10:45:27.138083] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:34:42.640 [2024-12-09 10:45:27.138265] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.901 [2024-12-09 10:45:27.311667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:42.901 [2024-12-09 10:45:27.426997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:42.901 [2024-12-09 10:45:27.427113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:42.901 [2024-12-09 10:45:27.427151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:42.901 [2024-12-09 10:45:27.427181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:42.901 [2024-12-09 10:45:27.427207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
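nvmf_tgt is now up inside the namespace (pid 2205051, core mask 0x3, tracepoint group 0xFFFF), and its startup notices spell out how to pull a trace while the test runs. Either form works, per the notices above — the binary path is an assumption based on the build tree layout seen in this log:

build/bin/spdk_trace -s nvmf -i 0      # live snapshot of the enabled tracepoints
cp /dev/shm/nvmf_trace.0 /tmp/         # or grab the shm file for offline analysis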
00:34:42.901 [2024-12-09 10:45:27.430168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.901 [2024-12-09 10:45:27.430186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2205051 00:34:43.162 10:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:43.732 [2024-12-09 10:45:28.108532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.733 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:43.993 Malloc0 00:34:43.993 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:44.564 10:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.826 10:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.399 [2024-12-09 10:45:30.012809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.399 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:45.969 [2024-12-09 10:45:30.353933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2205458 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2205458 
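With the target app listening, the trace provisions it over rpc.py: create the TCP transport, back a 64 MiB malloc bdev, create subsystem cnode1 with ANA reporting enabled (-r, which is what makes the later set_ana_state calls meaningful), attach the namespace, and listen on both 4420 and 4421; bdevperf is then launched as the host with its own RPC socket. The same sequence, replayed:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421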
/var/tmp/bdevperf.sock 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2205458 ']' 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:45.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.969 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:46.229 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.229 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:46.229 10:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:46.799 10:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:47.375 Nvme0n1 00:34:47.375 10:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:47.946 Nvme0n1 00:34:47.946 10:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:47.946 10:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:49.861 10:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:49.861 10:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:50.432 10:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:50.692 10:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:52.074 10:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:52.074 10:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.074 10:45:36 
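bdevperf (core mask 0x4, verify workload, qd 128) attaches the same subsystem twice, once through each port, with -x multipath, so the second connect becomes an extra path on the single bdev Nvme0n1 instead of a duplicate-name error; per rpc.py's option names, -l -1 never abandons the controller and -o 10 retries the connection every 10 s. Replayed against the bdevperf RPC socket:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # options as captured above
for port in 4420 4421; do
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
done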
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.074 10:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.333 10:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.333 10:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:52.333 10:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.333 10:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.903 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.903 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.903 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.903 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.164 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.164 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.164 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.164 10:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:54.105 10:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.105 10:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:54.105 10:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.105 10:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:54.677 10:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.677 10:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:54.677 10:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.677 10:45:39 
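Every status probe from here on is the same two-step: dump bdev_nvme_get_io_paths over the bdevperf socket, then jq out one attribute of the path whose trsvcid matches. Folded into the helper shape the trace keeps calling (multipath_status.sh@64; the real function may differ in detail):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
port_status() {    # port_status <port> <current|connected|accessible> <expected>
  local got
  got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
  [[ $got == "$3" ]]
}
port_status 4420 current true   # as in the first check above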
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:55.249 10:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.249 10:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:55.249 10:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:55.510 10:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:56.081 10:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:57.024 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:57.024 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:57.024 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.024 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.285 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.285 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:57.285 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.285 10:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.854 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.854 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.854 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.854 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:58.114 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.114 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:58.114 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
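Each scenario begins with set_ANA_state, which is just the nvmf_subsystem_listener_set_ana_state RPC issued once per listener on the target side; the host then gets a one-second grace period — presumably to process the ANA change async event — before check_status asserts the new path states. The pattern, as in host/multipath_status.sh@59-60:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
set_ANA_state() {   # $1 -> state for the 4420 listener, $2 -> for 4421
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n "$1"
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
set_ANA_state non_optimized optimized   # the transition running above
sleep 1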
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.114 10:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.685 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.685 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.685 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.685 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:58.944 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.944 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:58.944 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.944 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:59.204 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.204 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:59.204 10:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:59.773 10:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:00.033 10:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.413 10:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:01.673 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.673 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:01.673 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.673 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:02.617 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.617 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:02.617 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.617 10:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:02.617 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.617 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:02.617 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.617 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.561 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.561 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:03.561 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:03.561 10:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.823 10:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.823 10:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:03.823 10:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:35:04.396 10:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:04.968 10:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:05.916 10:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:05.916 10:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:05.916 10:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.916 10:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:06.489 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.489 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:06.489 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.489 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:07.062 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:07.062 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:07.062 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.062 10:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:07.653 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.653 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:07.653 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.653 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:08.222 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.222 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:08.222 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:35:08.222 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:08.484 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.484 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:08.484 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.484 10:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:09.056 10:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:09.056 10:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:09.056 10:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:09.627 10:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:10.200 10:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:11.142 10:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:11.142 10:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:11.142 10:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.142 10:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:11.713 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.713 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:11.713 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.713 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
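By this point the six booleans fed to check_status have a readable mapping, inferred from the call pattern: in order, the expected current/connected/accessible values for the 4420 path and then the 4421 path. So "check_status false false true true false false" above asserts that with both listeners inaccessible neither path is current or usable while both TCP connections stay up. Building on the port_status sketch earlier:

check_status() {   # $1/$2 current, $3/$4 connected, $5/$6 accessible (4420 then 4421)
  port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
  port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
  port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}
check_status false false true true false false   # the inaccessible/inaccessible case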
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.974 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:12.546 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.546 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:12.546 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.546 10:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:12.806 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.806 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:12.806 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.806 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:13.376 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.376 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:13.376 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.376 10:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:13.638 10:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:13.638 10:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:13.638 10:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:14.211 10:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:14.803 10:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:15.748 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:15.748 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:15.748 10:46:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.748 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:16.023 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.023 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:16.023 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.023 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:16.285 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.285 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:16.285 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.285 10:46:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:16.856 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.856 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:16.856 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.856 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:17.116 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.116 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:17.116 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.116 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:17.376 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.376 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:17.376 10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.376 
10:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:17.947 10:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:17.947 10:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:18.519 10:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:18.519 10:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:18.779 10:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:19.040 10:46:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:20.424 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:20.424 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:20.424 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.424 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:20.424 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.424 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:20.425 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.425 10:46:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:20.993 10:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:20.994 10:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:20.994 10:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:20.994 10:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:21.282 10:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.282 10:46:05 
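[Editor's note] Two RPC layers are in play here: the checks go through -s /var/tmp/bdevperf.sock to ask the host (bdevperf) what it currently sees, while set_ANA_state talks to the target over the default rpc socket to change what is advertised. Just before this sweep the test also switched the multipath policy with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active (@116), which is why from this point both accessible paths can be current at once. A sketch of set_ANA_state, assuming the two traced calls are its whole body (rpc_py_target is a name used here for illustration, standing in for the default-socket rpc.py):

    # set_ANA_state <state for listener 4420> <state for listener 4421>
    # states seen in this run: optimized | non_optimized | inaccessible
    set_ANA_state() {
        $rpc_py_target nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n $1
        $rpc_py_target nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n $2
    }

The sleep 1 after each transition gives the host time to pick up the ANA change (via an asynchronous event and a re-read of the ANA log page) before check_status asserts the new view.
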
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:21.282 10:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.282 10:46:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:21.855 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.855 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:21.855 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.855 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:22.115 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:22.115 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:22.115 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.115 10:46:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:22.688 10:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:22.689 10:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:22.689 10:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:22.952 10:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:23.524 10:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:24.479 10:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:24.479 10:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:24.479 10:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.480 10:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:25.062 10:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:25.062 10:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:25.062 10:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:25.062 10:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:25.636 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:25.636 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:25.636 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:25.636 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:26.207 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.207 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:26.207 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.207 10:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:26.781 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.781 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:26.781 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.781 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:27.042 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.042 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:27.042 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.042 10:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:27.614 10:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.614 10:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:27.614 
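[Editor's note] check_status itself (lines @68 through @73 of multipath_status.sh in the trace) is nothing more than six port_status calls in a fixed order, so its six positional arguments read as: current for 4420 and 4421, connected for 4420 and 4421, accessible for 4420 and 4421. Reconstructed on that basis, reusing the port_status sketch above:

    # check_status <cur4420> <cur4421> <con4420> <con4421> <acc4420> <acc4421>
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

If the suite runs with errexit, as SPDK test scripts typically do, a single mismatched flag fails the whole testcase.
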
10:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:28.185 10:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:28.447 10:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:29.391 10:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:29.391 10:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:29.391 10:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.391 10:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:30.335 10:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:30.335 10:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:30.335 10:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.335 10:46:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:30.595 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:30.595 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:30.595 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:30.595 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:31.165 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:31.165 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:31.165 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:31.165 10:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:31.737 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:31.737 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:31.737 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:31.737 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:32.311 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:32.311 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:32.311 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:32.311 10:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:32.881 10:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:32.881 10:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:32.881 10:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:33.453 10:46:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:34.027 10:46:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:34.975 10:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:34.975 10:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:34.975 10:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:34.975 10:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:35.547 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:35.547 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:35.547 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:35.547 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:36.120 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
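[Editor's note] With the non_optimized/inaccessible transition just issued (@133), all five ANA sweeps of this testcase are on record, and the expected flags passed to check_status line up with the listener states in a simple way: connected never drops (the TCP connections stay up), accessible is false exactly when a listener is inaccessible, and under active_active a path stops being current only while a strictly better path exists. Summarized from the traced check_status arguments (an observation about this run, not the script source):

    # ANA 4420      / ANA 4421        cur4420 cur4421 con4420 con4421 acc4420 acc4421
    # inaccessible  / optimized       false   true    true    true    false   true
    # optimized     / optimized       true    true    true    true    true    true
    # non_optimized / optimized       false   true    true    true    true    true
    # non_optimized / non_optimized   true    true    true    true    true    true
    # non_optimized / inaccessible    true    false   true    true    true    false
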
\f\a\l\s\e ]] 00:35:36.120 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:36.120 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.120 10:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:36.694 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:36.694 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:36.694 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:36.694 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:37.269 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.269 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:37.269 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.269 10:46:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:37.842 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:37.842 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:37.842 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:37.842 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:38.104 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:38.104 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2205458 00:35:38.104 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2205458 ']' 00:35:38.104 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2205458 00:35:38.104 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:38.104 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:38.104 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2205458 00:35:38.365 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
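[Editor's note] The teardown goes through killprocess in autotest_common.sh, and the traced line numbers (@954 through @978) give away its shape: validate the pid argument, probe the process with kill -0, look up its comm name (here reactor_2, bdevperf's SPDK reactor on core 2, matching the 0x4 core mask in the result block below), refuse to kill a bare sudo wrapper, then kill and wait. A reconstruction consistent with that trace, not the verbatim source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @954: no pid supplied
        kill -0 "$pid"                                      # @958: process must exist
        if [ "$(uname)" = Linux ]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid") # @960
        fi
        [ "$process_name" = sudo ] && return 1              # @964: never kill the sudo wrapper
        echo "killing process with pid $pid"                # @972
        kill "$pid"                                         # @973
        wait "$pid"                                         # @978: reap and collect exit status
    }
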
process_name=reactor_2 00:35:38.365 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:38.365 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2205458' 00:35:38.365 killing process with pid 2205458 00:35:38.365 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2205458 00:35:38.365 10:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2205458 00:35:38.365 { 00:35:38.365 "results": [ 00:35:38.365 { 00:35:38.365 "job": "Nvme0n1", 00:35:38.365 "core_mask": "0x4", 00:35:38.365 "workload": "verify", 00:35:38.365 "status": "terminated", 00:35:38.365 "verify_range": { 00:35:38.365 "start": 0, 00:35:38.365 "length": 16384 00:35:38.365 }, 00:35:38.365 "queue_depth": 128, 00:35:38.365 "io_size": 4096, 00:35:38.365 "runtime": 50.094902, 00:35:38.365 "iops": 4222.824909408945, 00:35:38.365 "mibps": 16.495409802378692, 00:35:38.365 "io_failed": 0, 00:35:38.365 "io_timeout": 0, 00:35:38.365 "avg_latency_us": 30235.99761693414, 00:35:38.365 "min_latency_us": 521.8607407407408, 00:35:38.365 "max_latency_us": 6089508.02962963 00:35:38.365 } 00:35:38.365 ], 00:35:38.365 "core_count": 1 00:35:38.365 } 00:35:38.640 10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2205458 00:35:38.640 10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:38.641 [2024-12-09 10:45:30.445400] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:35:38.641 [2024-12-09 10:45:30.445519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2205458 ] 00:35:38.641 [2024-12-09 10:45:30.558145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.641 [2024-12-09 10:45:30.655788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.641 Running I/O for 90 seconds... 
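[Editor's note] The terminated-job summary above is internally consistent: with "io_size": 4096, throughput in MiB/s is iops * io_size / 2^20, and the ~50.09 s runtime spans the whole ANA sweep. A one-line check (awk used here just for the float math):

    awk 'BEGIN { printf "%.6f MiB/s\n", 4222.824909408945 * 4096 / 1048576 }'
    # prints 16.495410 MiB/s, matching the reported "mibps" of 16.4954...
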
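[Editor's note] Everything that follows is the replayed bdevperf log (the cat of test/nvmf/host/try.txt): SPDK/DPDK startup, per-second IOPS samples, and then long runs of paired nvme_qpair notices, one print_command/print_completion pair per I/O that completed in error while a listener was held inaccessible. The completion status (03/02) is NVMe status code type 0x3 (path-related) with status code 0x02, which the driver spells out as ASYMMETRIC ACCESS INACCESSIBLE; sqhd is the submission queue head, p the phase tag, and dnr the do-not-retry bit, all taken from the completion entry. With dnr:0 these are expected, retryable failures during the ANA transitions rather than test errors. A quick way to tally them from the saved log:

    # count errored I/Os per opcode and submission queue
    grep -o 'READ sqid:[0-9]*\|WRITE sqid:[0-9]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
        sort | uniq -c
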
00:35:38.641 4264.00 IOPS, 16.66 MiB/s [2024-12-09T09:46:23.295Z] 4324.50 IOPS, 16.89 MiB/s [2024-12-09T09:46:23.295Z] 4408.33 IOPS, 17.22 MiB/s [2024-12-09T09:46:23.295Z] 4423.00 IOPS, 17.28 MiB/s [2024-12-09T09:46:23.295Z] 4450.20 IOPS, 17.38 MiB/s [2024-12-09T09:46:23.295Z] 4466.83 IOPS, 17.45 MiB/s [2024-12-09T09:46:23.295Z] 4453.57 IOPS, 17.40 MiB/s [2024-12-09T09:46:23.295Z] 4447.25 IOPS, 17.37 MiB/s [2024-12-09T09:46:23.295Z] 4442.11 IOPS, 17.35 MiB/s [2024-12-09T09:46:23.295Z] 4464.60 IOPS, 17.44 MiB/s [2024-12-09T09:46:23.295Z] 4464.55 IOPS, 17.44 MiB/s [2024-12-09T09:46:23.295Z] 4458.08 IOPS, 17.41 MiB/s [2024-12-09T09:46:23.295Z] 4461.31 IOPS, 17.43 MiB/s [2024-12-09T09:46:23.295Z] 4465.00 IOPS, 17.44 MiB/s [2024-12-09T09:46:23.295Z] 4461.13 IOPS, 17.43 MiB/s [2024-12-09T09:46:23.295Z] 4471.69 IOPS, 17.47 MiB/s [2024-12-09T09:46:23.295Z] 4467.53 IOPS, 17.45 MiB/s [2024-12-09T09:46:23.295Z] 4466.94 IOPS, 17.45 MiB/s [2024-12-09T09:46:23.295Z] 4471.26 IOPS, 17.47 MiB/s [2024-12-09T09:46:23.295Z] 4473.65 IOPS, 17.48 MiB/s [2024-12-09T09:46:23.295Z] 4475.48 IOPS, 17.48 MiB/s [2024-12-09T09:46:23.295Z] [2024-12-09 10:45:54.028713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.641 [2024-12-09 10:45:54.028829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.028873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.028895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.028923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.028942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.028987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029325] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.029975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.029994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.031606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 
10:45:54.031668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.031770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.031793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.031820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.031839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.031864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.031882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.031907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.031931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.031958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.032002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.032060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.032101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.032156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.032197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.641 [2024-12-09 10:45:54.032253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.641 [2024-12-09 10:45:54.032293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107480 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.032966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.032991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.033930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:35:38.642 [2024-12-09 10:45:54.033974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.033992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.642 [2024-12-09 10:45:54.034905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.642 [2024-12-09 10:45:54.034932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.034950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.034984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.035002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.035026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.035066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.035130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.643 [2024-12-09 10:45:54.035173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.643 [2024-12-09 10:45:54.036425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.036544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.036642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.036775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.036824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.036868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.036912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.036936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.036961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.037030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.037071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.037127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.037168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.037223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.037263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.037318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.037359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.037414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.037454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.643 [2024-12-09 10:45:54.037509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107856 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:38.643 [2024-12-09 10:45:54.037551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:35:38.643 [2024-12-09 10:45:54.037606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.643 [2024-12-09 10:45:54.037647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
[... the same two-line command/completion pattern repeats for the remainder of this span (wall clock 00:35:38.643-00:35:38.649, host time 2024-12-09 10:45:54.037-10:45:54.058): WRITE commands, plus a few READs with SGL TRANSPORT DATA BLOCK, all sqid:1 nsid:1 len:8 over lba:107256-108272, each completed by the target with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, cdw0:0 p:0 m:0 dnr:0, sqhd cycling 0x0000-0x007f ...]
00:35:38.649 [2024-12-09 10:45:54.058261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:38.649 [2024-12-09 10:45:54.058316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.058977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.058995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.059949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.059967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:80 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.060948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.060965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.061012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.650 [2024-12-09 10:45:54.061055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.061111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.650 [2024-12-09 10:45:54.061150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:38.650 [2024-12-09 10:45:54.061204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.650 [2024-12-09 10:45:54.061243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.061298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.061338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.061392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.061432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 
10:45:54.061486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.061527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.061581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.061620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.061674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.061714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.061785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.061805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.063948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.063966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064656] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.064954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.064978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.065028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.065084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.065126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.065180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.065220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.065273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 10:45:54.065313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.651 [2024-12-09 10:45:54.065368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.651 [2024-12-09 
10:45:54.065420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.065972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.065996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108088 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.066974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.066999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.067101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.067196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.067304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.067401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.067496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.067590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.067689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.067748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 
10:45:54.068910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.068936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.068967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.068987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.069046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.069088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.069143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.069183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.069238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.652 [2024-12-09 10:45:54.069278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.069332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.652 [2024-12-09 10:45:54.069372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.652 [2024-12-09 10:45:54.069427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.069535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.069633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.069744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.069827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.069869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.069912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.069955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.069973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.070978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.070995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.071019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.071069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.071127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.653 [2024-12-09 10:45:54.071167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:38.653 [2024-12-09 10:45:54.071222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:38.653 [2024-12-09 10:45:54.071261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:35:38.653 [2024-12-09 10:45:54.071316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.653 [2024-12-09 10:45:54.071357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
[... several hundred similar NOTICE pairs: nvme_io_qpair_print_command WRITE (and occasional READ) commands on sqid:1, lba 107256-108272, len:8, each completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:35:38.660 [2024-12-09 10:45:54.091285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:38.660 [2024-12-09 10:45:54.091325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.091965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.091983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.092012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.092066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.092134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.092175] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.660 [2024-12-09 10:45:54.092242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.660 [2024-12-09 10:45:54.092284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.092350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.092392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.092458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.092498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.092563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.092604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.092670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.092710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.092795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.092820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.092851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.092870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.092899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.092918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:45:54.093185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:45:54.093240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.661 4333.05 IOPS, 16.93 MiB/s [2024-12-09T09:46:23.315Z] 4144.65 IOPS, 16.19 MiB/s [2024-12-09T09:46:23.315Z] 3971.96 IOPS, 15.52 MiB/s [2024-12-09T09:46:23.315Z] 3813.08 IOPS, 14.89 MiB/s 
[2024-12-09T09:46:23.315Z] 3666.42 IOPS, 14.32 MiB/s [2024-12-09T09:46:23.315Z] 3530.63 IOPS, 13.79 MiB/s [2024-12-09T09:46:23.315Z] 3502.36 IOPS, 13.68 MiB/s [2024-12-09T09:46:23.315Z] 3537.38 IOPS, 13.82 MiB/s [2024-12-09T09:46:23.315Z] 3569.37 IOPS, 13.94 MiB/s [2024-12-09T09:46:23.315Z] 3620.03 IOPS, 14.14 MiB/s [2024-12-09T09:46:23.315Z] 3693.84 IOPS, 14.43 MiB/s [2024-12-09T09:46:23.315Z] 3765.79 IOPS, 14.71 MiB/s [2024-12-09T09:46:23.315Z] 3836.82 IOPS, 14.99 MiB/s [2024-12-09T09:46:23.315Z] 3892.69 IOPS, 15.21 MiB/s [2024-12-09T09:46:23.315Z] 3907.19 IOPS, 15.26 MiB/s [2024-12-09T09:46:23.315Z] 3920.49 IOPS, 15.31 MiB/s [2024-12-09T09:46:23.315Z] 3932.82 IOPS, 15.36 MiB/s [2024-12-09T09:46:23.315Z] 3950.51 IOPS, 15.43 MiB/s [2024-12-09T09:46:23.315Z] 3961.97 IOPS, 15.48 MiB/s [2024-12-09T09:46:23.315Z] 4000.56 IOPS, 15.63 MiB/s [2024-12-09T09:46:23.315Z] 4051.36 IOPS, 15.83 MiB/s [2024-12-09T09:46:23.315Z] 4099.88 IOPS, 16.02 MiB/s [2024-12-09T09:46:23.315Z] 4140.11 IOPS, 16.17 MiB/s [2024-12-09T09:46:23.315Z] 4185.38 IOPS, 16.35 MiB/s [2024-12-09T09:46:23.315Z] [2024-12-09 10:46:18.469704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:46:18.469845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.469921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:46:18.469947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:46:18.470063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:46:18.470687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.470954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.661 [2024-12-09 10:46:18.470982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.471054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.661 [2024-12-09 10:46:18.471098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:38.661 [2024-12-09 10:46:18.471155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.662 [2024-12-09 10:46:18.471205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.662 [2024-12-09 10:46:18.471300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.471954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.471991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.472929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.472953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.473003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.473064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.662 [2024-12-09 10:46:18.473106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.474003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.662 [2024-12-09 10:46:18.474064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.474133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.662 [2024-12-09 10:46:18.474179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:38.662 [2024-12-09 10:46:18.474237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.662 [2024-12-09 10:46:18.474279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.474372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.474486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.474582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.474676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.474791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.474837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.474881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.474923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.474966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.474991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.475038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.475096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.475137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.475192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.475232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.475287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.475328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.475395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.475439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 
m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.477300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:38.663 [2024-12-09 10:46:18.477362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.477432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.477479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.477536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.477578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.477633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.477673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.663 [2024-12-09 10:46:18.477752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:38.663 [2024-12-09 10:46:18.477807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.663 4210.76 IOPS, 16.45 MiB/s [2024-12-09T09:46:23.317Z] 4211.79 IOPS, 16.45 MiB/s [2024-12-09T09:46:23.317Z] 4217.88 IOPS, 16.48 MiB/s [2024-12-09T09:46:23.317Z] 4221.10 IOPS, 16.49 MiB/s [2024-12-09T09:46:23.317Z] 4228.28 IOPS, 16.52 MiB/s [2024-12-09T09:46:23.317Z] Received shutdown signal, test time was about 50.096496 seconds 00:35:38.663 00:35:38.663 Latency(us) 00:35:38.663 [2024-12-09T09:46:23.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.663 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:38.663 Verification LBA range: start 0x0 length 0x4000 00:35:38.663 Nvme0n1 : 50.09 4222.82 16.50 0.00 0.00 30236.00 521.86 6089508.03 00:35:38.663 [2024-12-09T09:46:23.317Z] =================================================================================================================== 00:35:38.663 [2024-12-09T09:46:23.317Z] Total : 4222.82 16.50 0.00 0.00 30236.00 521.86 6089508.03 00:35:38.663 10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:39.237 10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:39.237 10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:39.237 10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:39.237 10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
[... nvmfcleanup (nvmf/common.sh@123-@129): set +e, then modprobe -v -r nvme-tcp (output: rmmod nvme_tcp, rmmod nvme_fabrics, rmmod nvme_keyring) and modprobe -v -r nvme-fabrics, set -e, return 0 ...]
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2205051 ']'
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2205051
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2205051 ']'
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2205051
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2205051
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2205051'
killing process with pid 2205051
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2205051
10:46:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2205051
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
[... iptr (nvmf/common.sh@297, @791): iptables-save | grep -v SPDK_NVMF | iptables-restore, dropping the rules the test added ...]
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
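The killprocess trace just above follows a defensive pattern: verify a pid was given and is alive, refuse to kill a sudo wrapper, then signal and reap. A rough reconstruction from the traced commands; the real helper in common/autotest_common.sh has more branches than shown, so treat this as a sketch:

# Reconstructed from the xtrace above, not copied from autotest_common.sh.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                          # @954: no pid given
    kill -0 "$pid" || return 0                         # @958: already gone
    if [ "$(uname)" = Linux ]; then                    # @959
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @960
        [ "$process_name" = sudo ] && return 1         # @964: never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"               # @972
    kill "$pid"                                        # @973
    wait "$pid"                                        # @978: reap, propagate exit status
}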
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
10:46:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
10:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1

real 1m2.510s
user 3m17.586s
sys 0m15.926s
10:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
10:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_host_multipath_status
************************************
10:46:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
10:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
10:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
10:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_discovery_remove_ifc
************************************
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... cmp_versions splits both versions on IFS=.-: into ver1=(1 15) and ver2=(2), walks the components with the decimal/regex helpers (scripts/common.sh@333-@368), finds ver1[0]=1 < ver2[0]=2, and returns 0: the installed lcov 1.15 sorts before 2 ...]
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
 --rc genhtml_branch_coverage=1
 --rc genhtml_function_coverage=1
 --rc genhtml_legend=1
 --rc geninfo_all_blocks=1
 --rc geninfo_unexecuted_blocks=1
 '
[... the same option block is then assigned to LCOV_OPTS (@1724) and to LCOV='lcov ...' (@1725, both export and assignment), three more near-identical copies ...]
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
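The lt 1.15 2 call traced above is a component-wise version comparison: split both versions on '.', '-' and ':', treat missing components as zero, and compare numerically left to right. A condensed re-implementation of the same idea, a sketch matching the traced behavior rather than the exact scripts/common.sh source:

# Sketch: return 0 when version $1 sorts strictly before version $2.
version_lt() {
    local IFS=.-:                      # split components like the trace above
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                            # equal is not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the lt 1.15 2 result above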
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[... nvmf/common.sh@9-@22 set the test defaults: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 (from nvme gen-hostnqn), NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02, NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"), NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH once more each (the value already repeats that trio eight times ahead of the system directories), then export PATH and echo the resulting, heavily duplicated value ...]
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
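The "integer expression expected" complaint above is a genuine, if harmless, shell bug in the sourced common.sh: test's -eq operator needs integer operands on both sides, and an unset flag variable expands to the empty string. A minimal reproduction with the usual guards; flag is an illustrative name, not the variable common.sh actually tests:

flag=''
[ "$flag" -eq 1 ]                     # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] || echo unset  # guard 1: default the value first
[ "$flag" = 1 ] || echo unset         # guard 2: compare as a string instead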
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:42.054 10:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:45.360 10:46:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:45.360 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.360 10:46:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:45.360 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:45.360 Found net devices under 0000:84:00.0: cvl_0_0 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:45.360 Found net devices under 0000:84:00.1: cvl_0_1 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.360 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.361 
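The nvmf_tcp_init sequence traced above builds the whole two-endpoint test topology on one host: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A minimal standalone sketch of the same setup, using the interface names and addresses from this run:

# Sketch reconstructed from the nvmf_tcp_init trace above; assumes the
# cvl_0_0/cvl_0_1 net devices found under 0000:84:00.0 and 0000:84:00.1.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The pings that follow in the log confirm both directions are reachable before the target application is started inside the namespace.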
10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:35:45.361 00:35:45.361 --- 10.0.0.2 ping statistics --- 00:35:45.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.361 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:45.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:35:45.361 00:35:45.361 --- 10.0.0.1 ping statistics --- 00:35:45.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.361 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2213624 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2213624 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2213624 ']' 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:45.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.361 10:46:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.361 [2024-12-09 10:46:29.923766] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:35:45.361 [2024-12-09 10:46:29.923883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.622 [2024-12-09 10:46:30.022664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.622 [2024-12-09 10:46:30.094046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.622 [2024-12-09 10:46:30.094137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.622 [2024-12-09 10:46:30.094157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.622 [2024-12-09 10:46:30.094174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.622 [2024-12-09 10:46:30.094188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.622 [2024-12-09 10:46:30.095009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.622 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.622 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:45.622 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.622 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.622 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.622 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.881 [2024-12-09 10:46:30.292356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.881 [2024-12-09 10:46:30.301337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:45.881 null0 00:35:45.881 [2024-12-09 10:46:30.334133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2213652 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2213652 /tmp/host.sock 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2213652 ']' 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:45.881 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.881 10:46:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.881 [2024-12-09 10:46:30.449384] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:35:45.881 [2024-12-09 10:46:30.449478] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213652 ] 00:35:46.143 [2024-12-09 10:46:30.598516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.143 [2024-12-09 10:46:30.708883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.085 10:46:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.473 [2024-12-09 10:46:32.756835] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:48.473 [2024-12-09 10:46:32.756863] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:48.473 [2024-12-09 10:46:32.756889] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:48.473 [2024-12-09 10:46:32.883506] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:48.473 [2024-12-09 10:46:33.104517] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:48.473 [2024-12-09 10:46:33.106118] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c9a0d0:1 started. 00:35:48.473 [2024-12-09 10:46:33.109304] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:48.473 [2024-12-09 10:46:33.109373] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:48.473 [2024-12-09 10:46:33.109418] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:48.474 [2024-12-09 10:46:33.109442] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:48.474 [2024-12-09 10:46:33.109468] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:48.474 [2024-12-09 10:46:33.114096] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c9a0d0 was disconnected and freed. delete nvme_qpair. 
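The wait_for_bdev/get_bdev_list polling that starts here is plain RPC scraping over the host application's UNIX socket: list the bdev names, normalize them into one sorted line, and spin until that line equals the expected value (nvme0n1 after attach, the empty string after the interface is pulled). An approximate reconstruction of the two helpers from host/discovery_remove_ifc.sh, pieced together from this trace (the real script may differ in details):

# rpc_cmd is the test framework's wrapper around scripts/rpc.py.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the bdev list matches the expected string
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # step @72: discovery must have created nvme0n1
wait_for_bdev ''        # step @79: after ifdown the list must drain to empty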
00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.474 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:48.737 10:46:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:49.689 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.951 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:49.951 10:46:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:50.895 10:46:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:51.840 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.099 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:52.099 10:46:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:53.046 10:46:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:35:53.989 [2024-12-09 10:46:38.549924] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:53.989 [2024-12-09 10:46:38.550063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.990 [2024-12-09 10:46:38.550115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.990 [2024-12-09 10:46:38.550159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.990 [2024-12-09 10:46:38.550194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.990 [2024-12-09 10:46:38.550228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.990 [2024-12-09 10:46:38.550260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.990 [2024-12-09 10:46:38.550293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.990 [2024-12-09 10:46:38.550326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.990 [2024-12-09 10:46:38.550361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:53.990 [2024-12-09 10:46:38.550392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.990 [2024-12-09 10:46:38.550424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c769e0 is same with the state(6) to be set 00:35:53.990 [2024-12-09 10:46:38.559941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c769e0 (9): Bad file descriptor 00:35:53.990 [2024-12-09 10:46:38.570002] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:53.990 [2024-12-09 10:46:38.570057] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:53.990 [2024-12-09 10:46:38.570090] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:53.990 [2024-12-09 10:46:38.570114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:53.990 [2024-12-09 10:46:38.570203] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
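The Delete qpairs / Start disconnecting / resetting controller / Start reconnecting cycle above is governed by the reconnect knobs the host was given at attach time: reconnect attempts fire once per second and the controller is declared lost after two seconds, which is what lets the nvme0n1 bdev disappear quickly once cvl_0_0 goes down. For reference, the attach command as issued by the test:

# Verbatim from the trace at host/discovery_remove_ifc.sh@69: discovery attach
# with aggressive failure detection so the bdev is torn down within ~2s of an outage.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach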
00:35:53.990 10:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:53.990 10:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:53.990 10:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:53.990 10:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:53.990 10:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.990 10:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:53.990 10:46:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:54.935 [2024-12-09 10:46:39.580810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:54.935 [2024-12-09 10:46:39.580936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c769e0 with addr=10.0.0.2, port=4420 00:35:54.935 [2024-12-09 10:46:39.580990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c769e0 is same with the state(6) to be set 00:35:54.935 [2024-12-09 10:46:39.581078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c769e0 (9): Bad file descriptor 00:35:54.935 [2024-12-09 10:46:39.582101] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:54.935 [2024-12-09 10:46:39.582204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:54.935 [2024-12-09 10:46:39.582248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:54.935 [2024-12-09 10:46:39.582287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:54.935 [2024-12-09 10:46:39.582321] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:54.935 [2024-12-09 10:46:39.582344] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:54.935 [2024-12-09 10:46:39.582364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:54.935 [2024-12-09 10:46:39.582396] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:54.935 [2024-12-09 10:46:39.582419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:55.195 10:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.195 10:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:55.195 10:46:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:56.231 [2024-12-09 10:46:40.584998] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:56.231 [2024-12-09 10:46:40.585084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
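The connect() failures with errno 110 (ETIMEDOUT) that the reconnect poller keeps hitting are expected: the outage was injected deliberately a few steps earlier, from inside the target's namespace, by removing the listen address and downing the port:

# Fault injection as traced at host/discovery_remove_ifc.sh@75-@76.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

With 10.0.0.2 unreachable, every reconnect attempt to port 4420 times out until the controller-loss timer expires.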
00:35:56.231 [2024-12-09 10:46:40.585170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:56.231 [2024-12-09 10:46:40.585209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:56.231 [2024-12-09 10:46:40.585246] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:56.232 [2024-12-09 10:46:40.585277] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:56.232 [2024-12-09 10:46:40.585301] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:56.232 [2024-12-09 10:46:40.585320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:56.232 [2024-12-09 10:46:40.585405] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:56.232 [2024-12-09 10:46:40.585508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.232 [2024-12-09 10:46:40.585562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.232 [2024-12-09 10:46:40.585605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.232 [2024-12-09 10:46:40.585638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.232 [2024-12-09 10:46:40.585672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.232 [2024-12-09 10:46:40.585703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.232 [2024-12-09 10:46:40.585762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.232 [2024-12-09 10:46:40.585797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.232 [2024-12-09 10:46:40.585832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:56.232 [2024-12-09 10:46:40.585864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:56.232 [2024-12-09 10:46:40.585897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
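At this point the host has given up: reinitialization failed, the pending reset was aborted, and the discovery poller removed the nqn.2016-06.io.spdk:cnode0 entry, so the next get_bdev_list returns an empty string and wait_for_bdev '' is satisfied. A hypothetical interactive check (not something this test runs) to watch the same teardown from outside would be to poll the host's controller list alongside its bdevs:

# Hypothetical observation commands, not part of discovery_remove_ifc.sh;
# both RPCs exist in stock SPDK.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
rpc_cmd -s /tmp/host.sock bdev_get_bdevs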
00:35:56.232 [2024-12-09 10:46:40.586006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c65d20 (9): Bad file descriptor 00:35:56.232 [2024-12-09 10:46:40.586986] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:56.232 [2024-12-09 10:46:40.587041] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:56.232 10:46:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:57.216 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:57.216 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:57.216 10:46:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.216 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:57.216 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:57.216 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:57.216 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:57.216 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:57.496 10:46:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:58.075 [2024-12-09 10:46:42.644954] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:58.075 [2024-12-09 10:46:42.645011] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:58.075 [2024-12-09 10:46:42.645068] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:58.335 [2024-12-09 10:46:42.731400] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:58.335 [2024-12-09 10:46:42.832786] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:58.335 [2024-12-09 10:46:42.834391] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c4fc20:1 started. 00:35:58.335 [2024-12-09 10:46:42.837598] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:58.335 [2024-12-09 10:46:42.837705] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:58.335 [2024-12-09 10:46:42.837806] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:58.335 [2024-12-09 10:46:42.837860] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:58.335 [2024-12-09 10:46:42.837891] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:58.335 [2024-12-09 10:46:42.883102] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c4fc20 was disconnected and freed. delete nvme_qpair. 
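Restoring the address and bringing cvl_0_0 back up (the two ip commands just above) is all the recovery needs: the still-running discovery service reconnects on its own, fetches a fresh log page, and re-attaches the subsystem as nvme1 on a second controller instance (note cnode0, 2 and the new qpair), so the test now waits for nvme1n1. Once that appears, teardown kills both daemons via the framework's killprocess helper, reconstructed approximately from the trace that follows:

# Approximate reconstruction of autotest_common.sh's killprocess, as traced below:
# confirm the pid is alive, refuse to kill a sudo wrapper, then signal and reap it.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 1           # process still running?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1       # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
}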
00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:58.335 10:46:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2213652 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2213652 ']' 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2213652 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213652 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2213652' 00:35:58.595 killing process with pid 2213652 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2213652 00:35:58.595 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2213652 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:58.854 rmmod nvme_tcp 00:35:58.854 rmmod nvme_fabrics 00:35:58.854 rmmod nvme_keyring 00:35:58.854 10:46:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2213624 ']' 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2213624 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2213624 ']' 00:35:58.854 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2213624 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213624 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2213624' 00:35:59.114 killing process with pid 2213624 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2213624 00:35:59.114 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2213624 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.374 10:46:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.912 10:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:01.912 00:36:01.912 real 0m19.700s 00:36:01.912 user 0m27.613s 00:36:01.912 sys 0m4.364s 00:36:01.912 10:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.912 10:46:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:01.912 ************************************ 00:36:01.912 END TEST nvmf_discovery_remove_ifc 00:36:01.912 ************************************ 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.912 ************************************ 00:36:01.912 START TEST nvmf_identify_kernel_target 00:36:01.912 ************************************ 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:01.912 * Looking for test storage... 00:36:01.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:01.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.912 --rc genhtml_branch_coverage=1 00:36:01.912 --rc genhtml_function_coverage=1 00:36:01.912 --rc genhtml_legend=1 00:36:01.912 --rc geninfo_all_blocks=1 00:36:01.912 --rc geninfo_unexecuted_blocks=1 00:36:01.912 00:36:01.912 ' 00:36:01.912 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:01.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.913 --rc genhtml_branch_coverage=1 00:36:01.913 --rc genhtml_function_coverage=1 00:36:01.913 --rc genhtml_legend=1 00:36:01.913 --rc geninfo_all_blocks=1 00:36:01.913 --rc geninfo_unexecuted_blocks=1 00:36:01.913 00:36:01.913 ' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:01.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.913 --rc genhtml_branch_coverage=1 00:36:01.913 --rc genhtml_function_coverage=1 00:36:01.913 --rc genhtml_legend=1 00:36:01.913 --rc geninfo_all_blocks=1 00:36:01.913 --rc geninfo_unexecuted_blocks=1 00:36:01.913 00:36:01.913 ' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:01.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:01.913 --rc genhtml_branch_coverage=1 00:36:01.913 --rc genhtml_function_coverage=1 00:36:01.913 --rc genhtml_legend=1 00:36:01.913 --rc geninfo_all_blocks=1 00:36:01.913 --rc geninfo_unexecuted_blocks=1 00:36:01.913 00:36:01.913 ' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:36:01.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.913 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.914 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.914 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:01.914 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:01.914 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:01.914 10:46:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:05.211 10:46:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:05.211 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:05.211 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:05.211 Found net devices under 0000:84:00.0: cvl_0_0 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:05.211 Found net devices under 0000:84:00.1: cvl_0_1 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.211 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:05.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:05.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:36:05.212 00:36:05.212 --- 10.0.0.2 ping statistics --- 00:36:05.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.212 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:05.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:36:05.212 00:36:05.212 --- 10.0.0.1 ping statistics --- 00:36:05.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.212 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.212 10:46:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:05.212 10:46:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:06.591 Waiting for block devices as requested 00:36:06.851 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:36:06.851 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.111 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.111 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.111 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.371 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.371 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.371 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:07.629 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:07.629 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.629 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.629 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.888 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.888 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.888 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.888 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:08.146 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
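Annotation: after the GPT probe above confirms /dev/nvme0n1 carries no partition table, the trace below (configure_kernel_target in nvmf/common.sh) assembles a kernel NVMe-oF/TCP target purely through configfs. The following is a minimal standalone sketch of that sequence, not the exact common.sh code: the NQN, backing device, and 10.0.0.1:4420 listener are taken from the log, but xtrace does not show the redirection targets of the echo calls, so the attribute file names here are the standard nvmet configfs ones, inferred.

# Sketch of the configfs steps configure_kernel_target performs below.
# Assumes the nvmet/nvmet-tcp modules are available and /dev/nvme0n1 is unused.
set -e
modprobe nvmet
modprobe nvmet-tcp    # loaded explicitly here; the port enable can also pull it in
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reappears later as the controller Model Number
echo 1 > "$subsys/attr_allow_any_host"                         # inferred target of the first bare 'echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Publishing the subsystem on the port is just a symlink:
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Once that symlink exists, the 'nvme discover ... -a 10.0.0.1 -t tcp -s 4420' call in the trace should report exactly the two records seen below: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.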
00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:08.146 No valid GPT data, bailing 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:08.146 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:08.406 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:08.406 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:36:08.406 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:08.406 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:36:08.406 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:08.406 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:36:08.407 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:36:08.407 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:36:08.407 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:08.407 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:36:08.407 00:36:08.407 Discovery Log Number of Records 2, Generation counter 2 00:36:08.407 =====Discovery Log Entry 0====== 00:36:08.407 trtype: tcp 00:36:08.407 adrfam: ipv4 00:36:08.407 subtype: current discovery subsystem 00:36:08.407 treq: not specified, sq flow control disable supported 00:36:08.407 portid: 1 00:36:08.407 trsvcid: 4420 00:36:08.407 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:08.407 traddr: 10.0.0.1 00:36:08.407 eflags: none 00:36:08.407 sectype: none 00:36:08.407 =====Discovery Log Entry 1====== 00:36:08.407 trtype: tcp 00:36:08.407 adrfam: ipv4 00:36:08.407 subtype: nvme subsystem 00:36:08.407 treq: not specified, sq flow control disable 
supported 00:36:08.407 portid: 1 00:36:08.407 trsvcid: 4420 00:36:08.407 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:08.407 traddr: 10.0.0.1 00:36:08.407 eflags: none 00:36:08.407 sectype: none 00:36:08.407 10:46:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:08.407 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:08.668 ===================================================== 00:36:08.668 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:08.668 ===================================================== 00:36:08.668 Controller Capabilities/Features 00:36:08.668 ================================ 00:36:08.668 Vendor ID: 0000 00:36:08.668 Subsystem Vendor ID: 0000 00:36:08.668 Serial Number: e816cb0f691eca3fd97d 00:36:08.668 Model Number: Linux 00:36:08.668 Firmware Version: 6.8.9-20 00:36:08.668 Recommended Arb Burst: 0 00:36:08.668 IEEE OUI Identifier: 00 00 00 00:36:08.668 Multi-path I/O 00:36:08.668 May have multiple subsystem ports: No 00:36:08.668 May have multiple controllers: No 00:36:08.668 Associated with SR-IOV VF: No 00:36:08.668 Max Data Transfer Size: Unlimited 00:36:08.668 Max Number of Namespaces: 0 00:36:08.668 Max Number of I/O Queues: 1024 00:36:08.668 NVMe Specification Version (VS): 1.3 00:36:08.668 NVMe Specification Version (Identify): 1.3 00:36:08.668 Maximum Queue Entries: 1024 00:36:08.668 Contiguous Queues Required: No 00:36:08.668 Arbitration Mechanisms Supported 00:36:08.668 Weighted Round Robin: Not Supported 00:36:08.668 Vendor Specific: Not Supported 00:36:08.668 Reset Timeout: 7500 ms 00:36:08.668 Doorbell Stride: 4 bytes 00:36:08.668 NVM Subsystem Reset: Not Supported 00:36:08.668 Command Sets Supported 00:36:08.668 NVM Command Set: Supported 00:36:08.668 Boot Partition: Not Supported 00:36:08.668 Memory Page Size Minimum: 4096 bytes 00:36:08.668 Memory Page Size Maximum: 4096 bytes 00:36:08.668 Persistent Memory Region: Not Supported 00:36:08.668 Optional Asynchronous Events Supported 00:36:08.668 Namespace Attribute Notices: Not Supported 00:36:08.668 Firmware Activation Notices: Not Supported 00:36:08.668 ANA Change Notices: Not Supported 00:36:08.668 PLE Aggregate Log Change Notices: Not Supported 00:36:08.668 LBA Status Info Alert Notices: Not Supported 00:36:08.668 EGE Aggregate Log Change Notices: Not Supported 00:36:08.668 Normal NVM Subsystem Shutdown event: Not Supported 00:36:08.668 Zone Descriptor Change Notices: Not Supported 00:36:08.668 Discovery Log Change Notices: Supported 00:36:08.668 Controller Attributes 00:36:08.668 128-bit Host Identifier: Not Supported 00:36:08.668 Non-Operational Permissive Mode: Not Supported 00:36:08.668 NVM Sets: Not Supported 00:36:08.668 Read Recovery Levels: Not Supported 00:36:08.668 Endurance Groups: Not Supported 00:36:08.668 Predictable Latency Mode: Not Supported 00:36:08.668 Traffic Based Keep ALive: Not Supported 00:36:08.668 Namespace Granularity: Not Supported 00:36:08.668 SQ Associations: Not Supported 00:36:08.668 UUID List: Not Supported 00:36:08.668 Multi-Domain Subsystem: Not Supported 00:36:08.668 Fixed Capacity Management: Not Supported 00:36:08.668 Variable Capacity Management: Not Supported 00:36:08.668 Delete Endurance Group: Not Supported 00:36:08.668 Delete NVM Set: Not Supported 00:36:08.668 Extended LBA Formats Supported: Not Supported 00:36:08.668 Flexible Data Placement 
Supported: Not Supported 00:36:08.668 00:36:08.668 Controller Memory Buffer Support 00:36:08.668 ================================ 00:36:08.668 Supported: No 00:36:08.668 00:36:08.668 Persistent Memory Region Support 00:36:08.668 ================================ 00:36:08.668 Supported: No 00:36:08.668 00:36:08.668 Admin Command Set Attributes 00:36:08.668 ============================ 00:36:08.668 Security Send/Receive: Not Supported 00:36:08.668 Format NVM: Not Supported 00:36:08.668 Firmware Activate/Download: Not Supported 00:36:08.668 Namespace Management: Not Supported 00:36:08.668 Device Self-Test: Not Supported 00:36:08.668 Directives: Not Supported 00:36:08.668 NVMe-MI: Not Supported 00:36:08.668 Virtualization Management: Not Supported 00:36:08.668 Doorbell Buffer Config: Not Supported 00:36:08.668 Get LBA Status Capability: Not Supported 00:36:08.668 Command & Feature Lockdown Capability: Not Supported 00:36:08.668 Abort Command Limit: 1 00:36:08.669 Async Event Request Limit: 1 00:36:08.669 Number of Firmware Slots: N/A 00:36:08.669 Firmware Slot 1 Read-Only: N/A 00:36:08.669 Firmware Activation Without Reset: N/A 00:36:08.669 Multiple Update Detection Support: N/A 00:36:08.669 Firmware Update Granularity: No Information Provided 00:36:08.669 Per-Namespace SMART Log: No 00:36:08.669 Asymmetric Namespace Access Log Page: Not Supported 00:36:08.669 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:08.669 Command Effects Log Page: Not Supported 00:36:08.669 Get Log Page Extended Data: Supported 00:36:08.669 Telemetry Log Pages: Not Supported 00:36:08.669 Persistent Event Log Pages: Not Supported 00:36:08.669 Supported Log Pages Log Page: May Support 00:36:08.669 Commands Supported & Effects Log Page: Not Supported 00:36:08.669 Feature Identifiers & Effects Log Page:May Support 00:36:08.669 NVMe-MI Commands & Effects Log Page: May Support 00:36:08.669 Data Area 4 for Telemetry Log: Not Supported 00:36:08.669 Error Log Page Entries Supported: 1 00:36:08.669 Keep Alive: Not Supported 00:36:08.669 00:36:08.669 NVM Command Set Attributes 00:36:08.669 ========================== 00:36:08.669 Submission Queue Entry Size 00:36:08.669 Max: 1 00:36:08.669 Min: 1 00:36:08.669 Completion Queue Entry Size 00:36:08.669 Max: 1 00:36:08.669 Min: 1 00:36:08.669 Number of Namespaces: 0 00:36:08.669 Compare Command: Not Supported 00:36:08.669 Write Uncorrectable Command: Not Supported 00:36:08.669 Dataset Management Command: Not Supported 00:36:08.669 Write Zeroes Command: Not Supported 00:36:08.669 Set Features Save Field: Not Supported 00:36:08.669 Reservations: Not Supported 00:36:08.669 Timestamp: Not Supported 00:36:08.669 Copy: Not Supported 00:36:08.669 Volatile Write Cache: Not Present 00:36:08.669 Atomic Write Unit (Normal): 1 00:36:08.669 Atomic Write Unit (PFail): 1 00:36:08.669 Atomic Compare & Write Unit: 1 00:36:08.669 Fused Compare & Write: Not Supported 00:36:08.669 Scatter-Gather List 00:36:08.669 SGL Command Set: Supported 00:36:08.669 SGL Keyed: Not Supported 00:36:08.669 SGL Bit Bucket Descriptor: Not Supported 00:36:08.669 SGL Metadata Pointer: Not Supported 00:36:08.669 Oversized SGL: Not Supported 00:36:08.669 SGL Metadata Address: Not Supported 00:36:08.669 SGL Offset: Supported 00:36:08.669 Transport SGL Data Block: Not Supported 00:36:08.669 Replay Protected Memory Block: Not Supported 00:36:08.669 00:36:08.669 Firmware Slot Information 00:36:08.669 ========================= 00:36:08.669 Active slot: 0 00:36:08.669 00:36:08.669 00:36:08.669 Error Log 00:36:08.669 
========= 00:36:08.669 00:36:08.669 Active Namespaces 00:36:08.669 ================= 00:36:08.669 Discovery Log Page 00:36:08.669 ================== 00:36:08.669 Generation Counter: 2 00:36:08.669 Number of Records: 2 00:36:08.669 Record Format: 0 00:36:08.669 00:36:08.669 Discovery Log Entry 0 00:36:08.669 ---------------------- 00:36:08.669 Transport Type: 3 (TCP) 00:36:08.669 Address Family: 1 (IPv4) 00:36:08.669 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:08.669 Entry Flags: 00:36:08.669 Duplicate Returned Information: 0 00:36:08.669 Explicit Persistent Connection Support for Discovery: 0 00:36:08.669 Transport Requirements: 00:36:08.669 Secure Channel: Not Specified 00:36:08.669 Port ID: 1 (0x0001) 00:36:08.669 Controller ID: 65535 (0xffff) 00:36:08.669 Admin Max SQ Size: 32 00:36:08.669 Transport Service Identifier: 4420 00:36:08.669 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:08.669 Transport Address: 10.0.0.1 00:36:08.669 Discovery Log Entry 1 00:36:08.669 ---------------------- 00:36:08.669 Transport Type: 3 (TCP) 00:36:08.669 Address Family: 1 (IPv4) 00:36:08.669 Subsystem Type: 2 (NVM Subsystem) 00:36:08.669 Entry Flags: 00:36:08.669 Duplicate Returned Information: 0 00:36:08.669 Explicit Persistent Connection Support for Discovery: 0 00:36:08.669 Transport Requirements: 00:36:08.669 Secure Channel: Not Specified 00:36:08.669 Port ID: 1 (0x0001) 00:36:08.669 Controller ID: 65535 (0xffff) 00:36:08.669 Admin Max SQ Size: 32 00:36:08.669 Transport Service Identifier: 4420 00:36:08.669 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:08.669 Transport Address: 10.0.0.1 00:36:08.669 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:08.669 get_feature(0x01) failed 00:36:08.669 get_feature(0x02) failed 00:36:08.669 get_feature(0x04) failed 00:36:08.669 ===================================================== 00:36:08.669 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:08.669 ===================================================== 00:36:08.669 Controller Capabilities/Features 00:36:08.669 ================================ 00:36:08.669 Vendor ID: 0000 00:36:08.669 Subsystem Vendor ID: 0000 00:36:08.669 Serial Number: d000a39a7bc936912cab 00:36:08.669 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:08.669 Firmware Version: 6.8.9-20 00:36:08.669 Recommended Arb Burst: 6 00:36:08.669 IEEE OUI Identifier: 00 00 00 00:36:08.669 Multi-path I/O 00:36:08.669 May have multiple subsystem ports: Yes 00:36:08.669 May have multiple controllers: Yes 00:36:08.669 Associated with SR-IOV VF: No 00:36:08.669 Max Data Transfer Size: Unlimited 00:36:08.669 Max Number of Namespaces: 1024 00:36:08.669 Max Number of I/O Queues: 128 00:36:08.669 NVMe Specification Version (VS): 1.3 00:36:08.669 NVMe Specification Version (Identify): 1.3 00:36:08.669 Maximum Queue Entries: 1024 00:36:08.669 Contiguous Queues Required: No 00:36:08.669 Arbitration Mechanisms Supported 00:36:08.669 Weighted Round Robin: Not Supported 00:36:08.669 Vendor Specific: Not Supported 00:36:08.669 Reset Timeout: 7500 ms 00:36:08.669 Doorbell Stride: 4 bytes 00:36:08.669 NVM Subsystem Reset: Not Supported 00:36:08.669 Command Sets Supported 00:36:08.669 NVM Command Set: Supported 00:36:08.669 Boot Partition: Not Supported 00:36:08.669 
Memory Page Size Minimum: 4096 bytes 00:36:08.669 Memory Page Size Maximum: 4096 bytes 00:36:08.669 Persistent Memory Region: Not Supported 00:36:08.669 Optional Asynchronous Events Supported 00:36:08.669 Namespace Attribute Notices: Supported 00:36:08.669 Firmware Activation Notices: Not Supported 00:36:08.669 ANA Change Notices: Supported 00:36:08.669 PLE Aggregate Log Change Notices: Not Supported 00:36:08.669 LBA Status Info Alert Notices: Not Supported 00:36:08.669 EGE Aggregate Log Change Notices: Not Supported 00:36:08.669 Normal NVM Subsystem Shutdown event: Not Supported 00:36:08.669 Zone Descriptor Change Notices: Not Supported 00:36:08.669 Discovery Log Change Notices: Not Supported 00:36:08.669 Controller Attributes 00:36:08.669 128-bit Host Identifier: Supported 00:36:08.669 Non-Operational Permissive Mode: Not Supported 00:36:08.669 NVM Sets: Not Supported 00:36:08.669 Read Recovery Levels: Not Supported 00:36:08.669 Endurance Groups: Not Supported 00:36:08.669 Predictable Latency Mode: Not Supported 00:36:08.669 Traffic Based Keep ALive: Supported 00:36:08.669 Namespace Granularity: Not Supported 00:36:08.669 SQ Associations: Not Supported 00:36:08.669 UUID List: Not Supported 00:36:08.669 Multi-Domain Subsystem: Not Supported 00:36:08.669 Fixed Capacity Management: Not Supported 00:36:08.669 Variable Capacity Management: Not Supported 00:36:08.669 Delete Endurance Group: Not Supported 00:36:08.669 Delete NVM Set: Not Supported 00:36:08.669 Extended LBA Formats Supported: Not Supported 00:36:08.669 Flexible Data Placement Supported: Not Supported 00:36:08.669 00:36:08.669 Controller Memory Buffer Support 00:36:08.669 ================================ 00:36:08.669 Supported: No 00:36:08.669 00:36:08.669 Persistent Memory Region Support 00:36:08.669 ================================ 00:36:08.669 Supported: No 00:36:08.669 00:36:08.669 Admin Command Set Attributes 00:36:08.669 ============================ 00:36:08.669 Security Send/Receive: Not Supported 00:36:08.669 Format NVM: Not Supported 00:36:08.669 Firmware Activate/Download: Not Supported 00:36:08.669 Namespace Management: Not Supported 00:36:08.669 Device Self-Test: Not Supported 00:36:08.669 Directives: Not Supported 00:36:08.669 NVMe-MI: Not Supported 00:36:08.669 Virtualization Management: Not Supported 00:36:08.669 Doorbell Buffer Config: Not Supported 00:36:08.669 Get LBA Status Capability: Not Supported 00:36:08.669 Command & Feature Lockdown Capability: Not Supported 00:36:08.669 Abort Command Limit: 4 00:36:08.669 Async Event Request Limit: 4 00:36:08.669 Number of Firmware Slots: N/A 00:36:08.669 Firmware Slot 1 Read-Only: N/A 00:36:08.669 Firmware Activation Without Reset: N/A 00:36:08.669 Multiple Update Detection Support: N/A 00:36:08.669 Firmware Update Granularity: No Information Provided 00:36:08.669 Per-Namespace SMART Log: Yes 00:36:08.669 Asymmetric Namespace Access Log Page: Supported 00:36:08.669 ANA Transition Time : 10 sec 00:36:08.669 00:36:08.669 Asymmetric Namespace Access Capabilities 00:36:08.670 ANA Optimized State : Supported 00:36:08.670 ANA Non-Optimized State : Supported 00:36:08.670 ANA Inaccessible State : Supported 00:36:08.670 ANA Persistent Loss State : Supported 00:36:08.670 ANA Change State : Supported 00:36:08.670 ANAGRPID is not changed : No 00:36:08.670 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:08.670 00:36:08.670 ANA Group Identifier Maximum : 128 00:36:08.670 Number of ANA Group Identifiers : 128 00:36:08.670 Max Number of Allowed Namespaces : 1024 00:36:08.670 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:08.670 Command Effects Log Page: Supported 00:36:08.670 Get Log Page Extended Data: Supported 00:36:08.670 Telemetry Log Pages: Not Supported 00:36:08.670 Persistent Event Log Pages: Not Supported 00:36:08.670 Supported Log Pages Log Page: May Support 00:36:08.670 Commands Supported & Effects Log Page: Not Supported 00:36:08.670 Feature Identifiers & Effects Log Page:May Support 00:36:08.670 NVMe-MI Commands & Effects Log Page: May Support 00:36:08.670 Data Area 4 for Telemetry Log: Not Supported 00:36:08.670 Error Log Page Entries Supported: 128 00:36:08.670 Keep Alive: Supported 00:36:08.670 Keep Alive Granularity: 1000 ms 00:36:08.670 00:36:08.670 NVM Command Set Attributes 00:36:08.670 ========================== 00:36:08.670 Submission Queue Entry Size 00:36:08.670 Max: 64 00:36:08.670 Min: 64 00:36:08.670 Completion Queue Entry Size 00:36:08.670 Max: 16 00:36:08.670 Min: 16 00:36:08.670 Number of Namespaces: 1024 00:36:08.670 Compare Command: Not Supported 00:36:08.670 Write Uncorrectable Command: Not Supported 00:36:08.670 Dataset Management Command: Supported 00:36:08.670 Write Zeroes Command: Supported 00:36:08.670 Set Features Save Field: Not Supported 00:36:08.670 Reservations: Not Supported 00:36:08.670 Timestamp: Not Supported 00:36:08.670 Copy: Not Supported 00:36:08.670 Volatile Write Cache: Present 00:36:08.670 Atomic Write Unit (Normal): 1 00:36:08.670 Atomic Write Unit (PFail): 1 00:36:08.670 Atomic Compare & Write Unit: 1 00:36:08.670 Fused Compare & Write: Not Supported 00:36:08.670 Scatter-Gather List 00:36:08.670 SGL Command Set: Supported 00:36:08.670 SGL Keyed: Not Supported 00:36:08.670 SGL Bit Bucket Descriptor: Not Supported 00:36:08.670 SGL Metadata Pointer: Not Supported 00:36:08.670 Oversized SGL: Not Supported 00:36:08.670 SGL Metadata Address: Not Supported 00:36:08.670 SGL Offset: Supported 00:36:08.670 Transport SGL Data Block: Not Supported 00:36:08.670 Replay Protected Memory Block: Not Supported 00:36:08.670 00:36:08.670 Firmware Slot Information 00:36:08.670 ========================= 00:36:08.670 Active slot: 0 00:36:08.670 00:36:08.670 Asymmetric Namespace Access 00:36:08.670 =========================== 00:36:08.670 Change Count : 0 00:36:08.670 Number of ANA Group Descriptors : 1 00:36:08.670 ANA Group Descriptor : 0 00:36:08.670 ANA Group ID : 1 00:36:08.670 Number of NSID Values : 1 00:36:08.670 Change Count : 0 00:36:08.670 ANA State : 1 00:36:08.670 Namespace Identifier : 1 00:36:08.670 00:36:08.670 Commands Supported and Effects 00:36:08.670 ============================== 00:36:08.670 Admin Commands 00:36:08.670 -------------- 00:36:08.670 Get Log Page (02h): Supported 00:36:08.670 Identify (06h): Supported 00:36:08.670 Abort (08h): Supported 00:36:08.670 Set Features (09h): Supported 00:36:08.670 Get Features (0Ah): Supported 00:36:08.670 Asynchronous Event Request (0Ch): Supported 00:36:08.670 Keep Alive (18h): Supported 00:36:08.670 I/O Commands 00:36:08.670 ------------ 00:36:08.670 Flush (00h): Supported 00:36:08.670 Write (01h): Supported LBA-Change 00:36:08.670 Read (02h): Supported 00:36:08.670 Write Zeroes (08h): Supported LBA-Change 00:36:08.670 Dataset Management (09h): Supported 00:36:08.670 00:36:08.670 Error Log 00:36:08.670 ========= 00:36:08.670 Entry: 0 00:36:08.670 Error Count: 0x3 00:36:08.670 Submission Queue Id: 0x0 00:36:08.670 Command Id: 0x5 00:36:08.670 Phase Bit: 0 00:36:08.670 Status Code: 0x2 00:36:08.670 Status Code Type: 0x0 00:36:08.670 Do Not Retry: 1 00:36:08.670 
Error Location: 0x28 00:36:08.670 LBA: 0x0 00:36:08.670 Namespace: 0x0 00:36:08.670 Vendor Log Page: 0x0 00:36:08.670 ----------- 00:36:08.670 Entry: 1 00:36:08.670 Error Count: 0x2 00:36:08.670 Submission Queue Id: 0x0 00:36:08.670 Command Id: 0x5 00:36:08.670 Phase Bit: 0 00:36:08.670 Status Code: 0x2 00:36:08.670 Status Code Type: 0x0 00:36:08.670 Do Not Retry: 1 00:36:08.670 Error Location: 0x28 00:36:08.670 LBA: 0x0 00:36:08.670 Namespace: 0x0 00:36:08.670 Vendor Log Page: 0x0 00:36:08.670 ----------- 00:36:08.670 Entry: 2 00:36:08.670 Error Count: 0x1 00:36:08.670 Submission Queue Id: 0x0 00:36:08.670 Command Id: 0x4 00:36:08.670 Phase Bit: 0 00:36:08.670 Status Code: 0x2 00:36:08.670 Status Code Type: 0x0 00:36:08.670 Do Not Retry: 1 00:36:08.670 Error Location: 0x28 00:36:08.670 LBA: 0x0 00:36:08.670 Namespace: 0x0 00:36:08.670 Vendor Log Page: 0x0 00:36:08.670 00:36:08.670 Number of Queues 00:36:08.670 ================ 00:36:08.670 Number of I/O Submission Queues: 128 00:36:08.670 Number of I/O Completion Queues: 128 00:36:08.670 00:36:08.670 ZNS Specific Controller Data 00:36:08.670 ============================ 00:36:08.670 Zone Append Size Limit: 0 00:36:08.670 00:36:08.670 00:36:08.670 Active Namespaces 00:36:08.670 ================= 00:36:08.670 get_feature(0x05) failed 00:36:08.670 Namespace ID:1 00:36:08.670 Command Set Identifier: NVM (00h) 00:36:08.670 Deallocate: Supported 00:36:08.670 Deallocated/Unwritten Error: Not Supported 00:36:08.670 Deallocated Read Value: Unknown 00:36:08.670 Deallocate in Write Zeroes: Not Supported 00:36:08.670 Deallocated Guard Field: 0xFFFF 00:36:08.670 Flush: Supported 00:36:08.670 Reservation: Not Supported 00:36:08.670 Namespace Sharing Capabilities: Multiple Controllers 00:36:08.670 Size (in LBAs): 1953525168 (931GiB) 00:36:08.670 Capacity (in LBAs): 1953525168 (931GiB) 00:36:08.670 Utilization (in LBAs): 1953525168 (931GiB) 00:36:08.670 UUID: 2e489c0c-58d8-4628-b0fb-2e14c6061b7c 00:36:08.670 Thin Provisioning: Not Supported 00:36:08.670 Per-NS Atomic Units: Yes 00:36:08.670 Atomic Boundary Size (Normal): 0 00:36:08.670 Atomic Boundary Size (PFail): 0 00:36:08.670 Atomic Boundary Offset: 0 00:36:08.670 NGUID/EUI64 Never Reused: No 00:36:08.670 ANA group ID: 1 00:36:08.670 Namespace Write Protected: No 00:36:08.670 Number of LBA Formats: 1 00:36:08.670 Current LBA Format: LBA Format #00 00:36:08.670 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:08.670 00:36:08.670 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:08.670 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:08.670 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:08.670 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:08.670 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:08.670 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:08.670 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:08.670 rmmod nvme_tcp 00:36:08.670 rmmod nvme_fabrics 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:08.931 10:46:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.931 10:46:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.855 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:10.856 10:46:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:12.769 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.769 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.769 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.769 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.769 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.769 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:36:12.769 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.769 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:12.769 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:13.710 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:36:13.972 00:36:13.972 real 0m12.376s 00:36:13.972 user 0m2.881s 00:36:13.972 sys 0m5.567s 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:13.972 ************************************ 00:36:13.972 END TEST nvmf_identify_kernel_target 00:36:13.972 ************************************ 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.972 ************************************ 00:36:13.972 START TEST nvmf_auth_host 00:36:13.972 ************************************ 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:13.972 * Looking for test storage... 
00:36:13.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:36:13.972 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:14.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.236 --rc genhtml_branch_coverage=1 00:36:14.236 --rc genhtml_function_coverage=1 00:36:14.236 --rc genhtml_legend=1 00:36:14.236 --rc geninfo_all_blocks=1 00:36:14.236 --rc geninfo_unexecuted_blocks=1 00:36:14.236 00:36:14.236 ' 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:14.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.236 --rc genhtml_branch_coverage=1 00:36:14.236 --rc genhtml_function_coverage=1 00:36:14.236 --rc genhtml_legend=1 00:36:14.236 --rc geninfo_all_blocks=1 00:36:14.236 --rc geninfo_unexecuted_blocks=1 00:36:14.236 00:36:14.236 ' 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:14.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.236 --rc genhtml_branch_coverage=1 00:36:14.236 --rc genhtml_function_coverage=1 00:36:14.236 --rc genhtml_legend=1 00:36:14.236 --rc geninfo_all_blocks=1 00:36:14.236 --rc geninfo_unexecuted_blocks=1 00:36:14.236 00:36:14.236 ' 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:14.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.236 --rc genhtml_branch_coverage=1 00:36:14.236 --rc genhtml_function_coverage=1 00:36:14.236 --rc genhtml_legend=1 00:36:14.236 --rc geninfo_all_blocks=1 00:36:14.236 --rc geninfo_unexecuted_blocks=1 00:36:14.236 00:36:14.236 ' 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:14.236 10:46:58 
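The scripts/common.sh xtrace above is the harness deciding whether the installed lcov predates 2.0: cmp_versions splits both version strings into numeric fields, walks them left to right, and because 1.15 < 2 the run keeps the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 flags seen in LCOV_OPTS below. A standalone sketch of the same comparison (lt_version is a hypothetical name; the harness's own helper also splits on '-' and ':'):

lt_version() {
    local IFS=.-
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        # missing fields compare as 0, so 1.15 vs 2 works field by field
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt_version 1.15 2 && echo 'lcov < 2: keep the legacy --rc flags'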
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:14.236 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:14.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:14.237 10:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:17.540 10:47:02 
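nvmftestinit is now probing the hardware: the e810, x722 and mlx arrays being filled above are per-family whitelists of PCI device IDs, and since this rig is flagged SPDK_TEST_NVMF_NICS=e810 only the two e810 entries (0x1592 and 0x159b) end up in pci_devs. Roughly what the scan amounts to, assuming pciutils is installed (the harness actually walks a prebuilt pci_bus_cache rather than shelling out):

intel=8086
for dev in 1592 159b; do
    # list E810-family functions in full domain:bus:dev.fn form
    lspci -D -d "${intel}:${dev}"
done

The two hits reported next, 0000:84:00.0 and 0000:84:00.1, are the two ports of a single E810 adapter.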
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:17.540 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:17.540 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.540 
10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:17.540 Found net devices under 0000:84:00.0: cvl_0_0 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:17.540 Found net devices under 0000:84:00.1: cvl_0_1 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:17.540 10:47:02 
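With the two ports discovered (cvl_0_0 and cvl_0_1), nvmf_tcp_init splits them across network namespaces so the NVMe/TCP traffic genuinely leaves one physical port and arrives on the other: cvl_0_0 becomes the target interface at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace that follows:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check

The two pings below (0.352 ms out, 0.162 ms back) confirm the link before any NVMe traffic is attempted.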
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:17.540 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:17.541 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:17.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:17.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:36:17.802 00:36:17.802 --- 10.0.0.2 ping statistics --- 00:36:17.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.802 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:17.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:17.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:36:17.802 00:36:17.802 --- 10.0.0.1 ping statistics --- 00:36:17.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.802 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.802 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2221290 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2221290 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2221290 ']' 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
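nvmfappstart then launches the target application inside the target namespace with every trace group enabled (-e 0xFFFF) plus the nvme_auth log flag, records its pid (2221290 here), and waitforlisten blocks until the RPC socket answers before any test RPCs are issued. A minimal sketch of that start-and-wait pattern, using the repo-relative paths from this run (the poll loop illustrates what waitforlisten does; it is not its actual body):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5    # keep polling until the app creates and serves /var/tmp/spdk.sock
done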
00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:17.803 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=561b90e71db346b94fff482411812ca5 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Px7 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 561b90e71db346b94fff482411812ca5 0 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 561b90e71db346b94fff482411812ca5 0 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=561b90e71db346b94fff482411812ca5 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Px7 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Px7 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Px7 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.375 10:47:02 
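gen_dhchap_key has just produced keys[0] and is generating its sha512 companion: xxd pulls the requested number of random bytes from /dev/urandom as a hex string, mktemp names the key file, and an uncaptured python one-liner (xtrace does not echo heredoc bodies) wraps the secret in the DHHC-1 interchange format. Assuming the NVMe-spec encoding, base64 over the key bytes followed by a little-endian CRC32 of those bytes, the reconstruction looks like this (the python body is an assumption, not copied from the harness):

key_hex=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars = a 16-byte secret
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key_hex" <<'PY' > "$file"
import sys, base64, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # assumed: DHCHAP secrets carry a CRC32 trailer
print(f"DHHC-1:00:{base64.b64encode(raw + crc).decode()}:")
PY
chmod 0600 "$file"    # the harness does the same before echoing the path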
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4a711cde1b1376f1beeb7a179665822078a9b1dc90efd26bd868293b20fe879f 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ijl 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4a711cde1b1376f1beeb7a179665822078a9b1dc90efd26bd868293b20fe879f 3 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4a711cde1b1376f1beeb7a179665822078a9b1dc90efd26bd868293b20fe879f 3 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4a711cde1b1376f1beeb7a179665822078a9b1dc90efd26bd868293b20fe879f 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:18.375 10:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ijl 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ijl 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ijl 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d946c3ce70a096857fba5404e922f1fa434bcbb33218f00 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zUr 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d946c3ce70a096857fba5404e922f1fa434bcbb33218f00 0 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d946c3ce70a096857fba5404e922f1fa434bcbb33218f00 0 
00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d946c3ce70a096857fba5404e922f1fa434bcbb33218f00 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zUr 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zUr 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zUr 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac6f534f6f8d84b1ed4eef7d7e6e0d8c39aa55944996b81e 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.X7M 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac6f534f6f8d84b1ed4eef7d7e6e0d8c39aa55944996b81e 2 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac6f534f6f8d84b1ed4eef7d7e6e0d8c39aa55944996b81e 2 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac6f534f6f8d84b1ed4eef7d7e6e0d8c39aa55944996b81e 00:36:18.638 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.X7M 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.X7M 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.X7M 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.639 10:47:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c3c30d200b33775fb1b01e5291a1b95 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dxq 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c3c30d200b33775fb1b01e5291a1b95 1 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c3c30d200b33775fb1b01e5291a1b95 1 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c3c30d200b33775fb1b01e5291a1b95 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:18.639 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dxq 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dxq 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.dxq 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=67847956fc1fba2924ecf713dc6aae69 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8v5 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 67847956fc1fba2924ecf713dc6aae69 1 00:36:18.901 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 67847956fc1fba2924ecf713dc6aae69 1 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=67847956fc1fba2924ecf713dc6aae69 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8v5 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8v5 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8v5 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=19a3679c815190b7c33395ac69498ce6c814b9baf2cceae3 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YTK 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 19a3679c815190b7c33395ac69498ce6c814b9baf2cceae3 2 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 19a3679c815190b7c33395ac69498ce6c814b9baf2cceae3 2 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=19a3679c815190b7c33395ac69498ce6c814b9baf2cceae3 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YTK 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YTK 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.YTK 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:18.902 10:47:03 
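The same recipe repeats for slots 1 through 4, with two knobs per call: the length argument counts hex characters, so 32, 48 and 64 yield 16-, 24- and 32-byte secrets, and the digest name indexes the map in the trace, which becomes the hash hint in the DHHC-1 string (0 for a null digest, 1 for sha256, 2 for sha384, 3 for sha512). Every keys[i] is paired with a ckeys[i] used as the controller-side secret for bidirectional auth, except slot 4, whose companion is left empty a few lines below (ckeys[4]=). In the harness's own terms:

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
# len is hex characters, i.e. twice the byte count: null/32 -> 16 B, sha384/48 -> 24 B, sha512/64 -> 32 B
printf 'sha384 keys get digest code %s\n' "${digests[sha384]}"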
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ffae715390cef7b3462c2aca837be807 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.guO 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ffae715390cef7b3462c2aca837be807 0 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ffae715390cef7b3462c2aca837be807 0 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ffae715390cef7b3462c2aca837be807 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:18.902 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.guO 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.guO 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.guO 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=670fc0f29abf652556a7edb584ecde1be85eb614d25455b933c07f5822bb76d6 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ka2 00:36:19.167 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 670fc0f29abf652556a7edb584ecde1be85eb614d25455b933c07f5822bb76d6 3 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 670fc0f29abf652556a7edb584ecde1be85eb614d25455b933c07f5822bb76d6 3 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=670fc0f29abf652556a7edb584ecde1be85eb614d25455b933c07f5822bb76d6 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ka2 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ka2 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ka2 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2221290 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2221290 ']' 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Px7 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ijl ]] 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ijl 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zUr 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.X7M ]] 00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2221290
00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2221290 ']'
00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:19.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:19.168 10:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Px7
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ijl ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ijl
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zUr
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.X7M ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X7M
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.dxq
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8v5 ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8v5
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.YTK
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.741 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.guO ]]
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.guO
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ka2
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.003 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
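With the SPDK application up (waitforlisten confirms the RPC socket is live), each generated secret file is registered as a named key: key0-key4 for the host keys and ckey0-ckey3 for the controller (bidirectional) counterparts; ckey4 is deliberately empty, so its [[ -n '' ]] guard skips registration. rpc_cmd here is the suite's wrapper around the SPDK rpc.py client talking to /var/tmp/spdk.sock, so the same registrations could be issued by hand like this (the /tmp paths are this run's mktemp names and differ on every run):

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Px7
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ijl
scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.zUr
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.X7M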
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:36:20.004 10:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:36:21.390 Waiting for block devices as requested
00:36:21.390 0000:82:00.0 (8086 0a54): vfio-pci -> nvme
00:36:21.652 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:36:21.652 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:36:21.913 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:36:21.913 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:36:21.913 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:36:22.175 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:36:22.175 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:36:22.175 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:36:22.436 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:36:22.436 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:36:22.696 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:36:22.696 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:36:22.696 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:36:22.957 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:36:22.957 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:36:22.957 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:36:23.901 No valid GPT data, bailing
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
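Before the kernel target can be built, setup.sh reset hands the node's devices back from vfio-pci to their kernel drivers (hence the vfio-pci -> nvme/ioatdma rebind lines), and the suite then selects the first NVMe block device that is neither zoned nor in use as backing storage. "No valid GPT data, bailing" from spdk-gpt.py is the desired outcome here: the disk carries no partition table, so block_in_use returns non-zero and /dev/nvme0n1 is claimed. A rough distillation of that guard (a sketch of the scripts/common.sh logic, not the verbatim code, with $SPDK_DIR standing in for the checkout path):

is_block_free() {
  local block=$1
  # spdk-gpt.py prints "No valid GPT data, bailing" when no GPT is present
  "$SPDK_DIR"/scripts/spdk-gpt.py "$block" || true
  # empty PTTYPE from blkid -> nothing owns the disk, safe to use
  [[ -z $(blkid -s PTTYPE -o value "/dev/$block") ]]
}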
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420
00:36:23.901
00:36:23.901 Discovery Log Number of Records 2, Generation counter 2
00:36:23.901 =====Discovery Log Entry 0======
00:36:23.901 trtype: tcp
00:36:23.901 adrfam: ipv4
00:36:23.901 subtype: current discovery subsystem
00:36:23.901 treq: not specified, sq flow control disable supported
00:36:23.901 portid: 1
00:36:23.901 trsvcid: 4420
00:36:23.901 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:36:23.901 traddr: 10.0.0.1
00:36:23.901 eflags: none
00:36:23.901 sectype: none
00:36:23.901 =====Discovery Log Entry 1======
00:36:23.901 trtype: tcp
00:36:23.901 adrfam: ipv4
00:36:23.901 subtype: nvme subsystem
00:36:23.901 treq: not specified, sq flow control disable supported
00:36:23.901 portid: 1
00:36:23.901 trsvcid: 4420
00:36:23.901 subnqn: nqn.2024-02.io.spdk:cnode0
00:36:23.901 traddr: 10.0.0.1
00:36:23.901 eflags: none
00:36:23.901 sectype: none
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
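configure_kernel_target has now assembled a complete kernel NVMe-oF target through configfs, and nvme discover verifies it end to end: record 0 is the discovery subsystem, record 1 is nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420/tcp. The xtrace hides each echo's redirection target, but laid over the standard Linux nvmet configfs tree the sequence plausibly corresponds to the following (the attribute names are an assumption from the nvmet layout, not shown in this log):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

nvmet_auth_init then tightens access: it creates the host entry, flips the subsystem back to an explicit ACL (the echo 0 at host/auth.sh@37 is presumably attr_allow_any_host again), and links nqn.2024-02.io.spdk:host0 into allowed_hosts.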
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==:
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==:
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==:
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==:
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:23.901 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.163 nvme0n1
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
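That was one full authentication round, and it is the pattern every remaining block in this log repeats: constrain what the initiator may negotiate with bdev_nvme_set_options, attach using the keyring names of the DH-HMAC-CHAP secrets (on success the RPC prints the bdev it created, the bare nvme0n1 lines), confirm the controller exists, then tear it down. The host/auth.sh@100-102 loops that follow run this exhaustive matrix, one round per digest x dhgroup x keyid. Reduced to plain rpc.py calls, the round above amounts to:

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0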
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/:
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=:
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:24.163 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/:
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]]
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=:
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.164 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.425 nvme0n1
00:36:24.425 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.425 10:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==:
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==:
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==:
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]]
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==:
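nvmet_auth_set_key is the target-side half of each round: before the initiator dials in, the digest, DH group, and DHHC-1 secrets for this keyid are written into the kernel host entry. The redirection targets are again hidden by the xtrace; assuming the standard nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key; secrets abbreviated here), the echoes above amount to:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:N2Q5...' > "$host/dhchap_key"        # host secret for keyid 1
echo 'DHHC-1:02:YWM2...' > "$host/dhchap_ctrl_key"   # controller secret, when a ckey exists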
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.425 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.687 nvme0n1
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ:
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH:
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ:
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH:
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.687 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.948 nvme0n1
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:24.948 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==:
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2:
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==:
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2:
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.470 nvme0n1
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=:
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=:
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.208 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.470 nvme0n1
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.470 10:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.470 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:25.470 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:25.470 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.470 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/:
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=:
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/:
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]]
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=:
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.471 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.731 nvme0n1
00:36:25.731 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:25.731 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:25.731 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.731 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.731 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==:
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==:
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==:
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==:
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:25.993 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:26.256 nvme0n1
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ:
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH:
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ:
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH:
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:26.256 10:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:26.517 nvme0n1
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.517 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.777 10:47:11 
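The get_main_ns_ip block that repeats before every attach reduces to a small lookup: map the active transport to the name of the environment variable holding the connection IP, then print its value (10.0.0.1 here). A condensed sketch matching the nvmf/common.sh@769-783 markers; TEST_TRANSPORT is an assumed name for the variable that expands to "tcp" in this trace:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@772: RDMA tests dial the target-side IP
        ["tcp"]=NVMF_INITIATOR_IP       # common.sh@773: TCP tests dial the initiator-side IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                    # common.sh@775
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # common.sh@775
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # common.sh@776: the variable *name*
    ip=${!ip}                                               # indirect expansion -> 10.0.0.1
    [[ -z $ip ]] && return 1                                # common.sh@778
    echo "$ip"                                              # common.sh@783
}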
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.777 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.037 nvme0n1 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.037 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.038 10:47:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.038 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.298 nvme0n1 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.298 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.299 10:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.869 nvme0n1 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:27.869 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:27.870 10:47:12 
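Each connect_authenticate pass distills to four host-side RPCs, all visible verbatim in the trace. rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, and key0/ckey0 are keyring names registered by an earlier, untruncated part of this test; the \n\v\m\e\0 escaping in the trace is just xtrace's rendering of the pattern-quoted right-hand side of ==:

# 1. Constrain what the initiator may negotiate for DH-HMAC-CHAP:
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# 2. Connect; authentication runs during the fabric CONNECT using the named keys:
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Success check: exactly one controller named nvme0 must now exist:
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# 4. Detach so the next dhgroup/keyid combination starts from a clean slate:
rpc_cmd bdev_nvme_detach_controller nvme0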
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.870 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.129 nvme0n1 00:36:28.129 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:28.129 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.129 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.129 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.129 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:28.391 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.392 10:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.651 nvme0n1 00:36:28.651 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.651 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.651 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.651 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.651 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:28.911 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.912 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.173 nvme0n1 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.173 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.174 10:47:13 
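The ckey=(...) expansion that just ran for keyid=4 is why the attach a few lines below passes --dhchap-key key4 with no --dhchap-ctrlr-key: ckeys[4] is empty, so the ${var:+word} expansion yields nothing and the flag disappears, exercising one-way (host-to-target only) authentication. A standalone reproduction with placeholder values:

ckeys=("c0" "c1" "c2" "c3" "")   # placeholder controller secrets; index 4 deliberately empty
for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # verbatim from host/auth.sh@58
    echo "keyid=$keyid: ${ckey[*]:-flag omitted, unidirectional auth}"
done
# keyid=0..3 print "--dhchap-ctrlr-key ckeyN"; keyid=4 hits the :- fallback.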
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.174 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.434 10:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.693 nvme0n1 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:29.693 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.694 10:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.633 nvme0n1 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.633 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 
00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.891 10:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.827 nvme0n1 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.827 10:47:16 
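The @101/@102 markers recurring through this trace are the two for-lines of the driver: every DH group is exercised against every key index, re-keying the target before each host connect. A reconstructed outline — nvmet_auth_set_key and connect_authenticate are the suite's helpers traced throughout this log, the dhgroup list shows only the groups visible in this excerpt, and the key values are placeholders:

digest=sha256                               # this phase of the test pins the digest
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)    # groups seen so far; the real list may be longer
keys=(key0 key1 key2 key3 key4)             # stand-ins for the DHHC-1 secrets
for dhgroup in "${dhgroups[@]}"; do         # host/auth.sh@101
    for keyid in "${!keys[@]}"; do          # host/auth.sh@102
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: re-key the target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach on the host
    done
done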
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.827 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.828 10:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.768 nvme0n1 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.768 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.355 nvme0n1 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.355 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.356 10:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.292 nvme0n1 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.292 10:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:36.199 nvme0n1 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.199 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.200 10:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.581 nvme0n1 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:37.581 
10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.581 10:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.051 nvme0n1 00:36:39.051 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.051 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.051 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.051 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.051 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.051 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.312 
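The recurring autotest_common.sh@563 xtrace_disable and @591 [[ 0 == 0 ]] lines bracketing each RPC are harness plumbing, not test logic: tracing is muted while the shared rpc.py wrapper runs, and a saved value is compared on the way back out. A simplified reading of that pattern (an assumption; the real helpers in test/common/autotest_common.sh also record and restore the caller's xtrace state):

  xtrace_disable() { set +x; }   # simplified; the real helper remembers $- first
  rpc_cmd() {
      xtrace_disable
      local rc=0
      "$rootdir/scripts/rpc.py" "$@" || rc=$?
      # assumption: the traced '[[ 0 == 0 ]]' is a saved-status/state check like this
      [[ $rc == 0 ]]
  }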
10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.312 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:39.313 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.313 10:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.694 nvme0n1 00:36:40.694 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.694 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.694 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.694 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.694 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.694 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.954 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.954 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.954 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
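keyid 4 is the only entry with an empty controller secret (ckey=''), so the [[ -z '' ]] test at host/auth.sh@51 skips programming the controller key on the target, and the attach below passes only --dhchap-key key4: this iteration exercises unidirectional (host-only) authentication, whereas keyids 0-3 also authenticate the controller back to the host. The optional flag comes from the :+ parameter expansion traced at host/auth.sh@58, which expands to an empty array when ckeys[keyid] is unset or empty:

  # pattern from the trace at host/auth.sh@58 and @61
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"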
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.955 10:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.863 nvme0n1 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.863 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.864 nvme0n1 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.864 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:43.125 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.126 nvme0n1 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.126 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:43.385 10:47:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:43.385 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.386 10:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.386 nvme0n1 00:36:43.386 10:47:28 
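The secrets themselves use the NVMe DH-HMAC-CHAP representation DHHC-1:XX:<base64>:, where XX is 00 for a cleartext secret and 01/02/03 for a secret already transformed with SHA-256/384/512. This run deliberately mixes classes: the keyid 2 iteration just traced pairs a 01 host key with a 01 controller key, while keyid 4 (still to come in this pass) carries a 03 key and no controller key at all. Secrets of any class can be generated with nvme-cli, for example (a usage sketch, not taken from this log):

  # generate a SHA-256-transformed host secret bound to the host NQN
  nvme gen-dhchap-key --hmac=1 --nqn nqn.2024-02.io.spdk:host0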
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.386 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.386 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.386 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.386 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.386 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.646 nvme0n1 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.646 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.907 nvme0n1 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.907 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.167 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.427 nvme0n1 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.427 
10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.427 10:47:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.427 10:47:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.686 nvme0n1 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:44.686 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.687 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.946 nvme0n1 00:36:44.946 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.946 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.946 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.946 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.946 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.946 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.207 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.208 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.469 nvme0n1 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:45.469 
10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.469 10:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.469 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.730 nvme0n1 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.730 
10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.730 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.302 nvme0n1 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:46.302 10:47:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.302 10:47:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.563 nvme0n1 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.563 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.564 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.133 nvme0n1 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.133 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.134 10:47:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.705 nvme0n1 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:47.705 10:47:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.705 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.290 nvme0n1 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.290 10:47:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.228 nvme0n1 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.228 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.229 10:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.169 nvme0n1 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.169 10:47:34 
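Each nvmet_auth_set_key invocation (host/auth.sh@42-@51) echoes the digest as 'hmac(shaN)', the DH group name, and the DHHC-1 secrets. xtrace does not capture where those echoes are redirected, but on a Linux kernel nvmet target they would plausibly land in the per-host configfs DH-CHAP attributes. A sketch under that assumption (the configfs paths are an assumption for this log, and the secrets are truncated here):

  hostnqn=nqn.2024-02.io.spdk:host0
  cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha384)'  > "$cfs/dhchap_hash"      # digest, as echoed at auth.sh@48
  echo ffdhe6144       > "$cfs/dhchap_dhgroup"   # DH group, auth.sh@49
  echo 'DHHC-1:00:...' > "$cfs/dhchap_key"       # host secret, auth.sh@50
  # Only when a controller secret exists (the [[ -z ... ]] guard at auth.sh@51):
  echo 'DHHC-1:02:...' > "$cfs/dhchap_ctrl_key"  # enables bidirectional auth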
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.169 10:47:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.169 10:47:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.107 nvme0n1 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:51.107 10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.107 
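On the host side, connect_authenticate (host/auth.sh@55-@61) first pins the negotiable parameters with bdev_nvme_set_options and then attaches, passing key names key0..key4 and ckey0..ckey4. rpc_cmd is the autotest wrapper around SPDK's rpc.py, and those names refer to keyring entries that must have been registered earlier in the test (outside this excerpt); a sketch with that registration step assumed, using hypothetical key-file paths:

  # Assumed prior step: register the DHHC-1 secrets as named keyring keys.
  rpc.py keyring_file_add_key key1  /tmp/key1.dhchap    # hypothetical paths
  rpc.py keyring_file_add_key ckey1 /tmp/ckey1.dhchap
  # Restrict what the host may negotiate (mirrors auth.sh@60 in the trace).
  rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # Attach; DH-HMAC-CHAP runs during the fabrics CONNECT (auth.sh@61).
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1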
10:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.050 nvme0n1 00:36:52.050 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.050 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.050 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.050 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.050 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.050 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.309 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.310 10:47:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.247 nvme0n1 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.247 10:47:37 
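After every attach, the test verifies that exactly one controller named nvme0 came up and then tears it down (host/auth.sh@64-@65); the "nvme0n1" lines in the log are the namespace appearing between attach and detach. The check, restated from the commands visible in the trace (the \n\v\m\e\0 form is just how xtrace quotes the nvme0 pattern):

  name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')  # auth.sh@64
  [[ $name == nvme0 ]]                    # exactly one controller, as expected
  rpc.py bdev_nvme_detach_controller nvme0                     # auth.sh@65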
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.247 10:47:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.157 nvme0n1 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.157 10:47:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.066 nvme0n1 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.066 
10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.066 10:47:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.446 nvme0n1 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.446 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:36:58.706 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.707 10:47:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.615 nvme0n1 00:37:00.615 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.616 10:47:44 
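The ip_candidates block that repeats before every attach is get_main_ns_ip (nvmf/common.sh@769-@783): it maps the transport to the environment variable holding the address to dial and echoes its value, passing through the [[ -z ]] guards seen in the trace. A sketch of that resolution logic; the indirect-expansion step is an assumption, since xtrace only shows the already-resolved 10.0.0.1:

  get_main_ns_ip() {
      local ip                                    # nvmf/common.sh@769
      local -A ip_candidates=(                    # @770
          ["rdma"]=NVMF_FIRST_TARGET_IP           # @772
          ["tcp"]=NVMF_INITIATOR_IP               # @773
      )
      [[ -z $TEST_TRANSPORT ]] && return 1        # @775: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}        # @776: ip=NVMF_INITIATOR_IP
      ip=${!ip}                                   # assumed indirection -> 10.0.0.1
      [[ -z $ip ]] && return 1                    # @778: [[ -z 10.0.0.1 ]]
      echo "$ip"                                  # @783
  }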
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.616 10:47:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.616 10:47:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.525 nvme0n1 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:02.525 nvme0n1 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.525 10:47:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.525 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.785 nvme0n1 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:37:02.785 
10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.785 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.045 nvme0n1 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.045 
10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.045 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.306 nvme0n1 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.306 10:47:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.566 nvme0n1 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:03.566 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.567 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:03.567 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:03.567 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:03.567 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:03.567 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.567 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.827 nvme0n1 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.827 
10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:03.827 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:03.828 10:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.828 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.087 nvme0n1 00:37:04.087 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.087 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.087 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.087 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.087 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:04.348 10:47:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.348 10:47:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.611 nvme0n1 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.611 10:47:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.611 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.871 nvme0n1 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.871 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:05.132 
10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.132 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.133 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
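The trace above and below is the sha512 pass of the DH-HMAC-CHAP sweep in host/auth.sh: for each DH group (ffdhe2048, then ffdhe3072, then ffdhe4096 starting at 00:37:05.393) and each keyid 0-4, the script programs the kernel nvmet target with the key pair, pins the SPDK initiator to the same digest and DH group, attaches a controller with the matching key, checks that the controller actually appeared, and detaches it. A minimal sketch of that loop, reconstructed from the rpc_cmd calls and host/auth.sh line references visible in the trace; the digests/dhgroups/keys/ckeys arrays, the body of nvmet_auth_set_key, and the keyring registration of the keyN/ckeyN names are assumed to be defined earlier in the script and are not verbatim SPDK source:

    # One sweep over digest x dhgroup x keyid, as seen in the xtrace.
    for digest in "${digests[@]}"; do          # e.g. sha256 sha384 sha512
      for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe2048 ffdhe3072 ffdhe4096
        for keyid in "${!keys[@]}"; do         # keyids 0..4
          # Target side: install the key (and controller key, when one exists)
          # for this digest/dhgroup in the kernel nvmet host entry.
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # Host side: restrict the initiator to the same digest and DH group.
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
              --dhchap-dhgroups "$dhgroup"
          # Bidirectional auth only when a controller key is defined for keyid
          # (keyid 4 in this run has no ckey, so the array stays empty).
          ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
              -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
              -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
          # Authentication succeeded iff the controller shows up; then tear down.
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done

The secrets echoed into nvmet use the NVMe-oF in-band authentication representation DHHC-1:NN:<base64>:, where the two-digit NN field selects how the secret is transformed into the retained key (00 = used as-is, 01/02/03 = SHA-256/384/512), which is why the key and ckey strings in this trace carry differing 00-03 prefixes.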
00:37:05.393 nvme0n1 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:05.394 10:47:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.394 10:47:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.966 nvme0n1 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:05.966 10:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:05.966 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:05.967 10:47:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.967 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.540 nvme0n1 00:37:06.540 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.540 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:06.540 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:06.540 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.540 10:47:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.540 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.110 nvme0n1 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.110 10:47:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.680 nvme0n1 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:07.680 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.681 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.249 nvme0n1 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.249 10:47:52 
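The auth.sh@101-103 lines above mark the outer loop advancing from ffdhe4096 to ffdhe6144: for the sha512 digest the test sweeps every DH group and, inside each, every key index, re-provisioning the target and reconnecting each time. The driving loop has essentially this shape (only the three groups and five keyids visible in this excerpt are listed; the real arrays may hold more):

    # Inferred shape of the auth.sh@101-104 sweep for the sha512 digest.
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)             # groups observed in this excerpt
    for dhgroup in "${dhgroups[@]}"; do                  # auth.sh@101
        for keyid in "${!keys[@]}"; do                   # auth.sh@102, keyids 0..4 here
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # auth.sh@103
            connect_authenticate sha512 "$dhgroup" "$keyid"  # auth.sh@104
        done
    done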
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.249 10:47:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.198 nvme0n1 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:09.198 10:47:53 
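Each connect is preceded by the nvmf/common.sh@769-783 run traced above: get_main_ns_ip maps the active transport to the environment variable holding the right address (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), dereferences it, and echoes the result, 10.0.0.1 here. Condensed, the helper behaves like this (the TEST_TRANSPORT name and the early-return details are assumptions; only the candidate map and the echo are visible in the trace):

    # Condensed get_main_ns_ip logic as traced at nvmf/common.sh@769-783.
    get_main_ns_ip() {
        local ip var
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        var=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        ip=${!var}                              # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1                # the real helper falls back to scanning here
        echo "$ip"
    }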
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.198 10:47:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.135 nvme0n1 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:10.135 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:10.395 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:10.395 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.395 10:47:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.332 nvme0n1 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.332 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.333 10:47:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.274 nvme0n1 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:12.274 10:47:56 
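The keyid 4 pass above is the unidirectional case: key carries a DHHC-1:03:... secret while ckey is empty, so the [[ -z '' ]] guard at auth.sh@51 skips the controller key. These secrets use the NVMe DH-HCHAP textual format DHHC-1:<t>:<base64>:, where <t> records how the secret was transformed (00 = untransformed, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret with a CRC-32 appended. One way to mint such a key is nvme-cli's generator (flag spellings may vary between nvme-cli versions):

    # Generate a 64-byte host secret transformed with SHA-512 (subtype 03,
    # the same shape as the keyid 4 key above).
    nvme gen-dhchap-key --key-length=64 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0
    # -> DHHC-1:03:<base64(secret || crc32)>: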
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:12.274 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:12.275 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:12.275 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.275 10:47:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.218 nvme0n1 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTYxYjkwZTcxZGIzNDZiOTRmZmY0ODI0MTE4MTJjYTXujRu/: 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: ]] 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE3MTFjZGUxYjEzNzZmMWJlZWI3YTE3OTY2NTgyMjA3OGE5YjFkYzkwZWZkMjZiZDg2ODI5M2IyMGZlODc5ZooTE7Y=: 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:13.218 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.219 10:47:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.126 nvme0n1 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.126 10:47:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.040 nvme0n1 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.040 10:48:01 
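Every xtrace_disable / set +x / [[ 0 == 0 ]] triple threading through this trace is the autotest rpc_cmd plumbing: tracing is muted around the JSON-RPC round trip, and the bracketed comparison at common/autotest_common.sh@591 asserts the call's exit status, which is 0 on every line in this excerpt, i.e. every RPC succeeded. Schematically it reduces to something like the following (a simplified reading, not the verbatim helper; the real rpc_cmd multiplexes a persistent rpc.py session rather than spawning one per call):

    # Simplified shape of the rpc_cmd status check traced as "[[ 0 == 0 ]]".
    rpc_cmd() {
        local rc
        xtrace_disable                 # common/autotest_common.sh@563
        ./scripts/rpc.py "$@"; rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                 # common/autotest_common.sh@591
    }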
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.040 10:48:01 
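The bare nvme0n1 tokens wedged between timestamps are RPC stdout rather than trace: bdev_nvme_attach_controller prints the name of each bdev it creates, and with a single namespace behind cnode0 that is nvme0n1 (controller nvme0, namespace 1). The created bdev can be inspected directly if needed:

    # Inspect the bdev the attach just created (name taken from the log output).
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].name'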
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.040 10:48:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.950 nvme0n1 00:37:18.950 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTlhMzY3OWM4MTUxOTBiN2MzMzM5NWFjNjk0OThjZTZjODE0YjliYWYyY2NlYWUzfSXrug==: 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmZhZTcxNTM5MGNlZjdiMzQ2MmMyYWNhODM3YmU4MDed+Ry2: 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:18.951 10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.951 
10:48:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.855 nvme0n1 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.855 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjcwZmMwZjI5YWJmNjUyNTU2YTdlZGI1ODRlY2RlMWJlODVlYjYxNGQyNTQ1NWI5MzNjMDdmNTgyMmJiNzZkNrKdotE=: 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.114 10:48:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.022 nvme0n1 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.022 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.023 request: 00:37:23.023 { 00:37:23.023 "name": "nvme0", 00:37:23.023 "trtype": "tcp", 00:37:23.023 "traddr": "10.0.0.1", 00:37:23.023 "adrfam": "ipv4", 00:37:23.023 "trsvcid": "4420", 00:37:23.023 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:23.023 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:23.023 "prchk_reftag": false, 00:37:23.023 "prchk_guard": false, 00:37:23.023 "hdgst": false, 00:37:23.023 "ddgst": false, 00:37:23.023 "allow_unrecognized_csi": false, 00:37:23.023 "method": "bdev_nvme_attach_controller", 00:37:23.023 "req_id": 1 00:37:23.023 } 00:37:23.023 Got JSON-RPC error response 00:37:23.023 response: 00:37:23.023 { 00:37:23.023 "code": -5, 00:37:23.023 "message": "Input/output error" 00:37:23.023 } 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
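Every DHHC-1 string exchanged in this trace is an NVMe DH-HMAC-CHAP secret in its standard textual form, DHHC-1:<t>:<base64>: , where the two-digit field <t> marks the payload as a plain secret (00) or one pre-transformed with SHA-256 (01), SHA-384 (02), or SHA-512 (03), and the base64 payload carries the secret bytes plus a CRC-32 check value. As a hedged aside, nvme-cli is not part of this run, but its gen-dhchap-key command mints keys in the same format:

    # Emits a string shaped like the DHHC-1:03:...: secrets in this trace:
    # a 64-byte secret, transformed with SHA-512 and tied to the given host
    # NQN. (nvme-cli's interface, not part of this test run.)
    nvme gen-dhchap-key --hmac=3 --key-length=64 --nqn=nqn.2024-02.io.spdk:host0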
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:23.023 request:
00:37:23.023 {
00:37:23.023 "name": "nvme0",
00:37:23.023 "trtype": "tcp",
00:37:23.023 "traddr": "10.0.0.1",
00:37:23.023 "adrfam": "ipv4",
00:37:23.023 "trsvcid": "4420",
00:37:23.023 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:37:23.023 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:37:23.023 "prchk_reftag": false,
00:37:23.023 "prchk_guard": false,
00:37:23.023 "hdgst": false,
00:37:23.023 "ddgst": false,
00:37:23.023 "dhchap_key": "key2",
00:37:23.023 "allow_unrecognized_csi": false,
00:37:23.023 "method": "bdev_nvme_attach_controller",
00:37:23.023 "req_id": 1
00:37:23.023 }
00:37:23.023 Got JSON-RPC error response
00:37:23.023 response:
00:37:23.023 {
00:37:23.023 "code": -5,
00:37:23.023 "message": "Input/output error"
00:37:23.023 }
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:37:23.023 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
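The request/response pair above is the suite's negative-test pattern: the NOT helper inverts rpc_cmd's exit status, and bdev_nvme_attach_controller surfaces a failed DH-HMAC-CHAP negotiation as JSON-RPC error -5 (Input/output error), here because key2 is not the key the target holds for this host. rpc_cmd hands its arguments to SPDK's scripts/rpc.py, so the failing call can be reproduced by hand from the workspace; a minimal sketch, assuming the default RPC socket and that key2 was already loaded into the application's keyring earlier in the run:

    # Expected to exit non-zero with "Input/output error" while the host
    # presents the wrong DH-CHAP key for nqn.2024-02.io.spdk:host0.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2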
00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.283 request: 00:37:23.283 { 00:37:23.283 "name": "nvme0", 00:37:23.283 "trtype": "tcp", 00:37:23.283 "traddr": "10.0.0.1", 00:37:23.283 "adrfam": "ipv4", 00:37:23.283 "trsvcid": "4420", 00:37:23.283 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:23.283 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:23.283 "prchk_reftag": false, 00:37:23.283 "prchk_guard": false, 00:37:23.283 "hdgst": false, 00:37:23.283 "ddgst": false, 00:37:23.283 "dhchap_key": "key1", 00:37:23.283 "dhchap_ctrlr_key": "ckey2", 00:37:23.283 "allow_unrecognized_csi": false, 00:37:23.283 "method": "bdev_nvme_attach_controller", 00:37:23.283 "req_id": 1 00:37:23.283 } 00:37:23.283 Got JSON-RPC error response 00:37:23.283 response: 00:37:23.283 { 00:37:23.283 "code": -5, 00:37:23.283 "message": "Input/output 
error" 00:37:23.283 } 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.283 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.284 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.543 nvme0n1 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:23.543 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:37:23.544 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:23.544 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:23.544 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.544 10:48:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.544 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.544 request: 00:37:23.544 { 00:37:23.544 "name": "nvme0", 00:37:23.544 "dhchap_key": "key1", 00:37:23.544 "dhchap_ctrlr_key": "ckey2", 00:37:23.544 "method": "bdev_nvme_set_keys", 00:37:23.803 "req_id": 1 00:37:23.803 } 00:37:23.803 Got JSON-RPC error response 00:37:23.803 response: 00:37:23.803 { 00:37:23.803 "code": -13, 00:37:23.803 "message": "Permission denied" 00:37:23.803 } 00:37:23.803 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:23.803 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:23.803 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.803 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.803 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:37:23.803 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:23.804 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.804 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.804 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.804 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.804 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:23.804 10:48:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:24.743 10:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:24.743 10:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.743 10:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.743 10:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.743 10:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.743 10:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:24.743 10:48:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:25.680 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.680 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.680 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.680 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:25.680 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q5NDZjM2NlNzBhMDk2ODU3ZmJhNTQwNGU5MjJmMWZhNDM0YmNiYjMzMjE4ZjAwd9ZtfA==: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: ]] 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YWM2ZjUzNGY2ZjhkODRiMWVkNGVlZjdkN2U2ZTBkOGMzOWFhNTU5NDQ5OTZiODFlwlD8Bg==: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.940 nvme0n1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGMzYzMwZDIwMGIzMzc3NWZiMWIwMWU1MjkxYTFiOTVvlTbQ: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: ]] 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Njc4NDc5NTZmYzFmYmEyOTI0ZWNmNzEzZGM2YWFlNjlzGIeH: 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.940 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.202 request: 00:37:26.202 { 00:37:26.202 "name": "nvme0", 00:37:26.202 "dhchap_key": "key2", 00:37:26.202 "dhchap_ctrlr_key": "ckey1", 00:37:26.202 "method": "bdev_nvme_set_keys", 00:37:26.202 "req_id": 1 00:37:26.202 } 00:37:26.202 Got JSON-RPC error response 00:37:26.202 response: 00:37:26.202 { 00:37:26.202 "code": -13, 00:37:26.202 "message": "Permission denied" 00:37:26.202 } 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:26.202 10:48:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:37:27.144 10:48:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.144 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.144 rmmod nvme_tcp 00:37:27.404 rmmod nvme_fabrics 00:37:27.404 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2221290 ']' 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2221290 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2221290 ']' 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2221290 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221290 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221290' 00:37:27.405 killing process with pid 2221290 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2221290 00:37:27.405 10:48:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2221290 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:37:27.665 10:48:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:30.208 10:48:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:31.587 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:31.587 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:31.587 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:31.587 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:31.587 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:31.587 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:31.587 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:31.587 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:31.587 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:32.527 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:37:32.787 10:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Px7 /tmp/spdk.key-null.zUr /tmp/spdk.key-sha256.dxq /tmp/spdk.key-sha384.YTK /tmp/spdk.key-sha512.ka2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:32.787 10:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:34.169 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:37:34.169 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:34.169 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
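The rm/rmdir sequence above undoes the kernel-target state that nvmet_auth_set_key built up during the test: the echo 'hmac(sha512)' / echo ffdhe8192 / echo DHHC-1:... triples seen throughout the trace are writes into the nvmet host entry's configfs attributes. A rough sketch of that target-side setup, assuming the Linux nvmet configfs interface (attribute names may vary by kernel version, and the key strings are placeholders, not this run's secrets):

    # Configure DH-HMAC-CHAP for one host entry on the kernel target.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'   > "$host/dhchap_hash"      # digest for the exchange
    echo 'ffdhe8192'      > "$host/dhchap_dhgroup"   # DH group
    echo 'DHHC-1:01:...:' > "$host/dhchap_key"       # host secret
    echo 'DHHC-1:01:...:' > "$host/dhchap_ctrl_key"  # controller secret (bidirectional)
    # Teardown mirrors the log: unlink the port's subsystem and the
    # allowed_hosts entry, rmdir the namespaces, port, and subsystem,
    # then 'modprobe -r nvmet_tcp nvmet'.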
00:37:34.169 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:37:34.169 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:37:34.169 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:37:34.169 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:37:34.169 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:37:34.169 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:37:34.169 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:37:34.169 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:37:34.169 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:37:34.169 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:37:34.169 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:37:34.169 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:37:34.169 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:37:34.169 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:37:34.428
00:37:34.428 real 1m20.411s
00:37:34.428 user 1m18.950s
00:37:34.428 sys 0m9.505s
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:37:34.428 ************************************
00:37:34.428 END TEST nvmf_auth_host
00:37:34.428 ************************************
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:37:34.428 ************************************
00:37:34.428 START TEST nvmf_digest
00:37:34.428 ************************************
00:37:34.428 10:48:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:37:34.428 * Looking for test storage...
00:37:34.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:34.428 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:34.428 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:37:34.428 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.687 --rc genhtml_branch_coverage=1 00:37:34.687 --rc genhtml_function_coverage=1 00:37:34.687 --rc genhtml_legend=1 00:37:34.687 --rc geninfo_all_blocks=1 00:37:34.687 --rc geninfo_unexecuted_blocks=1 00:37:34.687 00:37:34.687 ' 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.687 --rc genhtml_branch_coverage=1 00:37:34.687 --rc genhtml_function_coverage=1 00:37:34.687 --rc genhtml_legend=1 00:37:34.687 --rc geninfo_all_blocks=1 00:37:34.687 --rc geninfo_unexecuted_blocks=1 00:37:34.687 00:37:34.687 ' 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.687 --rc genhtml_branch_coverage=1 00:37:34.687 --rc genhtml_function_coverage=1 00:37:34.687 --rc genhtml_legend=1 00:37:34.687 --rc geninfo_all_blocks=1 00:37:34.687 --rc geninfo_unexecuted_blocks=1 00:37:34.687 00:37:34.687 ' 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:34.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.687 --rc genhtml_branch_coverage=1 00:37:34.687 --rc genhtml_function_coverage=1 00:37:34.687 --rc genhtml_legend=1 00:37:34.687 --rc geninfo_all_blocks=1 00:37:34.687 --rc geninfo_unexecuted_blocks=1 00:37:34.687 00:37:34.687 ' 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.687 
10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.687 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:34.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:34.688 10:48:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:34.688 10:48:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.988 
10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.988 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:37.989 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:37.989 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:37.989 Found net devices under 0000:84:00.0: cvl_0_0 
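
The "Found net devices under ..." lines above come from globbing each discovered NIC's sysfs net directory (the pci_net_devs assignment in the trace). The same PCI-to-netdev mapping can be reproduced by hand; a sketch using the addresses from the discovery above, not part of the test itself:

    for pci in 0000:84:00.0 0000:84:00.1; do
        # each PCI function lists its kernel net device name(s) under sysfs
        ls /sys/bus/pci/devices/$pci/net/
    done
    # on this host: cvl_0_0 for 84:00.0 and cvl_0_1 for 84:00.1, per the log above
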
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:37.989 Found net devices under 0000:84:00.1: cvl_0_1 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:37.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:37.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:37:37.989
00:37:37.989 --- 10.0.0.2 ping statistics ---
00:37:37.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:37.989 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:37.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:37.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms
00:37:37.989
00:37:37.989 --- 10.0.0.1 ping statistics ---
00:37:37.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:37.989 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:37:37.989 ************************************
00:37:37.989 START TEST nvmf_digest_clean ************************************
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@120 -- # local dsa_initiator 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2234467 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2234467 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2234467 ']' 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.989 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:37.989 [2024-12-09 10:48:22.333413] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:37:37.989 [2024-12-09 10:48:22.333510] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.989 [2024-12-09 10:48:22.464253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.989 [2024-12-09 10:48:22.580637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.989 [2024-12-09 10:48:22.580770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.989 [2024-12-09 10:48:22.580812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.989 [2024-12-09 10:48:22.580842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.989 [2024-12-09 10:48:22.580869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
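
Condensed from the trace above, the target side of this fixture is a single command: nvmf_tgt launched inside the test namespace and held at --wait-for-rpc until the script configures it over /var/tmp/spdk.sock (command verbatim from the log):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc
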
00:37:37.989 [2024-12-09 10:48:22.582189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.251 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:38.251 null0 00:37:38.251 [2024-12-09 10:48:22.882528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:38.511 [2024-12-09 10:48:22.906916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2234501 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2234501 /var/tmp/bperf.sock 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2234501 ']' 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:38.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.511 10:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:38.511 [2024-12-09 10:48:23.012221] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:37:38.511 [2024-12-09 10:48:23.012397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234501 ] 00:37:38.771 [2024-12-09 10:48:23.180825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.771 [2024-12-09 10:48:23.303012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.029 10:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:39.029 10:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:39.029 10:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:39.029 10:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:39.029 10:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:39.597 10:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:39.597 10:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:40.168 nvme0n1 00:37:40.168 10:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:40.168 10:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:40.430 Running I/O for 2 seconds... 
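
The initiator-side sequence for each pass, condensed from the RPC calls traced above ($SPDK below is shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

    # bdevperf was started with --wait-for-rpc; finish its initialization
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # attach the NVMe/TCP controller with data digest (--ddgst) enabled
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # kick off the timed workload
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
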
00:37:42.315 7544.00 IOPS, 29.47 MiB/s [2024-12-09T09:48:27.232Z] 7587.50 IOPS, 29.64 MiB/s
00:37:42.578 Latency(us)
00:37:42.578 [2024-12-09T09:48:27.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:42.578 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:42.578 nvme0n1 : 2.06 7428.94 29.02 0.00 0.00 16855.07 8107.05 58642.58
00:37:42.578 [2024-12-09T09:48:27.232Z] ===================================================================================================================
00:37:42.578 [2024-12-09T09:48:27.232Z] Total : 7428.94 29.02 0.00 0.00 16855.07 8107.05 58642.58
00:37:42.578 {
00:37:42.578   "results": [
00:37:42.578     {
00:37:42.578       "job": "nvme0n1",
00:37:42.578       "core_mask": "0x2",
00:37:42.578       "workload": "randread",
00:37:42.578       "status": "finished",
00:37:42.578       "queue_depth": 128,
00:37:42.578       "io_size": 4096,
00:37:42.578       "runtime": 2.059917,
00:37:42.578       "iops": 7428.940098071913,
00:37:42.578       "mibps": 29.01929725809341,
00:37:42.578       "io_failed": 0,
00:37:42.578       "io_timeout": 0,
00:37:42.578       "avg_latency_us": 16855.068798226443,
00:37:42.578       "min_latency_us": 8107.045925925926,
00:37:42.578       "max_latency_us": 58642.583703703705
00:37:42.578     }
00:37:42.578   ],
00:37:42.578   "core_count": 1
00:37:42.578 }
00:37:42.578 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:37:42.578 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:37:42.578 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:37:42.578 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:37:42.578 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:37:42.578 | select(.opcode=="crc32c")
00:37:42.578 | "\(.module_name) \(.executed)"'
00:37:43.149
reactor_1 = sudo ']' 00:37:43.149 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234501' 00:37:43.149 killing process with pid 2234501 00:37:43.149 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2234501 00:37:43.149 Received shutdown signal, test time was about 2.000000 seconds 00:37:43.149 00:37:43.149 Latency(us) 00:37:43.149 [2024-12-09T09:48:27.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.149 [2024-12-09T09:48:27.803Z] =================================================================================================================== 00:37:43.149 [2024-12-09T09:48:27.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:43.149 10:48:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2234501 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2235151 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2235151 /var/tmp/bperf.sock 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2235151 ']' 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:43.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.410 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:43.670 [2024-12-09 10:48:28.109486] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:37:43.670 [2024-12-09 10:48:28.109592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235151 ] 00:37:43.670 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:43.670 Zero copy mechanism will not be used. 00:37:43.670 [2024-12-09 10:48:28.246289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.930 [2024-12-09 10:48:28.361062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.930 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.930 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:43.930 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:43.930 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:43.930 10:48:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:44.873 10:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:44.873 10:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:45.133 nvme0n1 00:37:45.133 10:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:45.133 10:48:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:45.393 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:45.393 Zero copy mechanism will not be used. 00:37:45.393 Running I/O for 2 seconds... 
00:37:47.272 2626.00 IOPS, 328.25 MiB/s [2024-12-09T09:48:31.927Z] 2657.00 IOPS, 332.12 MiB/s
00:37:47.273 Latency(us)
00:37:47.273 [2024-12-09T09:48:31.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:47.273 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:47.273 nvme0n1 : 2.01 2654.65 331.83 0.00 0.00 6016.95 2402.99 11942.12
00:37:47.273 [2024-12-09T09:48:31.927Z] ===================================================================================================================
00:37:47.273 [2024-12-09T09:48:31.927Z] Total : 2654.65 331.83 0.00 0.00 6016.95 2402.99 11942.12
00:37:47.273 {
00:37:47.273   "results": [
00:37:47.273     {
00:37:47.273       "job": "nvme0n1",
00:37:47.273       "core_mask": "0x2",
00:37:47.273       "workload": "randread",
00:37:47.273       "status": "finished",
00:37:47.273       "queue_depth": 16,
00:37:47.273       "io_size": 131072,
00:37:47.273       "runtime": 2.007801,
00:37:47.273       "iops": 2654.645555012673,
00:37:47.273       "mibps": 331.8306943765841,
00:37:47.273       "io_failed": 0,
00:37:47.273       "io_timeout": 0,
00:37:47.273       "avg_latency_us": 6016.950282815649,
00:37:47.273       "min_latency_us": 2402.9866666666667,
00:37:47.273       "max_latency_us": 11942.115555555556
00:37:47.273     }
00:37:47.273   ],
00:37:47.273   "core_count": 1
00:37:47.273 }
00:37:47.273 10:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:37:47.273 10:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:37:47.273 10:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:37:47.273 10:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:37:47.273 10:48:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:37:47.273 | select(.opcode=="crc32c")
00:37:47.273 | "\(.module_name) \(.executed)"'
00:37:47.841 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:37:47.841 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:37:47.841 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2235151
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2235151 ']'
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2235151
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235151
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- #
'[' reactor_1 = sudo ']' 00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235151' 00:37:47.842 killing process with pid 2235151 00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2235151 00:37:47.842 Received shutdown signal, test time was about 2.000000 seconds 00:37:47.842 00:37:47.842 Latency(us) 00:37:47.842 [2024-12-09T09:48:32.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.842 [2024-12-09T09:48:32.496Z] =================================================================================================================== 00:37:47.842 [2024-12-09T09:48:32.496Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:47.842 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2235151 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2235588 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2235588 /var/tmp/bperf.sock 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2235588 ']' 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:48.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:48.102 10:48:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:48.102 [2024-12-09 10:48:32.688585] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:37:48.102 [2024-12-09 10:48:32.688697] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235588 ] 00:37:48.362 [2024-12-09 10:48:32.809830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.362 [2024-12-09 10:48:32.926436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.622 10:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:48.622 10:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:48.622 10:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:48.622 10:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:48.622 10:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:49.189 10:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:49.189 10:48:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:49.756 nvme0n1 00:37:49.756 10:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:49.756 10:48:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:50.015 Running I/O for 2 seconds... 
00:37:51.903 9174.00 IOPS, 35.84 MiB/s [2024-12-09T09:48:36.558Z] 8993.00 IOPS, 35.13 MiB/s
00:37:51.904 Latency(us)
00:37:51.904 [2024-12-09T09:48:36.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:51.904 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:51.904 nvme0n1 : 2.01 8994.63 35.14 0.00 0.00 14205.87 5170.06 23787.14
00:37:51.904 [2024-12-09T09:48:36.558Z] ===================================================================================================================
00:37:51.904 [2024-12-09T09:48:36.558Z] Total : 8994.63 35.14 0.00 0.00 14205.87 5170.06 23787.14
00:37:51.904 {
00:37:51.904   "results": [
00:37:51.904     {
00:37:51.904       "job": "nvme0n1",
00:37:51.904       "core_mask": "0x2",
00:37:51.904       "workload": "randwrite",
00:37:51.904       "status": "finished",
00:37:51.904       "queue_depth": 128,
00:37:51.904       "io_size": 4096,
00:37:51.904       "runtime": 2.013869,
00:37:51.904       "iops": 8994.626760727733,
00:37:51.904       "mibps": 35.13526078409271,
00:37:51.904       "io_failed": 0,
00:37:51.904       "io_timeout": 0,
00:37:51.904       "avg_latency_us": 14205.869812504347,
00:37:51.904       "min_latency_us": 5170.062222222222,
00:37:51.904       "max_latency_us": 23787.140740740742
00:37:51.904     }
00:37:51.904   ],
00:37:51.904   "core_count": 1
00:37:51.904 }
00:37:52.163 10:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:37:52.163 10:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:37:52.163 10:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:37:52.163 10:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:37:52.163 | select(.opcode=="crc32c")
00:37:52.163 | "\(.module_name) \(.executed)"'
00:37:52.163 10:48:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2235588
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2235588 ']'
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2235588
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235588
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235588' 00:37:52.422 killing process with pid 2235588 00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2235588 00:37:52.422 Received shutdown signal, test time was about 2.000000 seconds 00:37:52.422 00:37:52.422 Latency(us) 00:37:52.422 [2024-12-09T09:48:37.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.422 [2024-12-09T09:48:37.076Z] =================================================================================================================== 00:37:52.422 [2024-12-09T09:48:37.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:52.422 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2235588 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2236104 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2236104 /var/tmp/bperf.sock 00:37:52.987 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2236104 ']' 00:37:52.988 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:52.988 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.988 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:52.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:52.988 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.988 10:48:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:52.988 [2024-12-09 10:48:37.421128] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
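
The per-pass teardown traced above is autotest's killprocess helper; the checks visible in the xtrace (kill -0, ps --no-headers -o comm=) reduce to roughly the following. A simplified sketch, not the helper's full text:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                      # bail out if already gone
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"                  # signal, then reap
        fi
    }
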
00:37:52.988 [2024-12-09 10:48:37.421221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236104 ] 00:37:52.988 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:52.988 Zero copy mechanism will not be used. 00:37:52.988 [2024-12-09 10:48:37.590153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.248 [2024-12-09 10:48:37.739746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.507 10:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.507 10:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:53.507 10:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:53.507 10:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:53.507 10:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:54.074 10:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:54.074 10:48:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:54.640 nvme0n1 00:37:54.900 10:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:54.900 10:48:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:54.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:54.900 Zero copy mechanism will not be used. 00:37:54.900 Running I/O for 2 seconds... 
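Every bperf pass in this suite is driven by the same RPC choreography just traced: bdevperf starts idle (-z holds it until perform_tests, --wait-for-rpc defers framework init), the framework is started over the UNIX socket, a controller is attached with the data digest enabled, and only then does I/O run. Condensed into one sketch (error handling and the waitforlisten-style polling are omitted):

  bperf=./build/examples/bdevperf
  rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"

  $bperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &

  $rpc framework_start_init
  # --ddgst turns on the NVMe/TCP data digest (crc32c) this test measures
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests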
00:37:57.217 2601.00 IOPS, 325.12 MiB/s [2024-12-09T09:48:41.871Z] 2567.00 IOPS, 320.88 MiB/s 00:37:57.217 Latency(us) 00:37:57.217 [2024-12-09T09:48:41.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.217 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:57.217 nvme0n1 : 2.01 2562.83 320.35 0.00 0.00 6222.82 3070.48 12718.84 00:37:57.217 [2024-12-09T09:48:41.871Z] =================================================================================================================== 00:37:57.217 [2024-12-09T09:48:41.871Z] Total : 2562.83 320.35 0.00 0.00 6222.82 3070.48 12718.84 00:37:57.217 { 00:37:57.217 "results": [ 00:37:57.217 { 00:37:57.217 "job": "nvme0n1", 00:37:57.217 "core_mask": "0x2", 00:37:57.217 "workload": "randwrite", 00:37:57.217 "status": "finished", 00:37:57.217 "queue_depth": 16, 00:37:57.217 "io_size": 131072, 00:37:57.217 "runtime": 2.010275, 00:37:57.217 "iops": 2562.8334431856338, 00:37:57.217 "mibps": 320.3541803982042, 00:37:57.217 "io_failed": 0, 00:37:57.217 "io_timeout": 0, 00:37:57.217 "avg_latency_us": 6222.81996779388, 00:37:57.217 "min_latency_us": 3070.482962962963, 00:37:57.217 "max_latency_us": 12718.838518518518 00:37:57.217 } 00:37:57.217 ], 00:37:57.217 "core_count": 1 00:37:57.217 } 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:57.217 | select(.opcode=="crc32c") 00:37:57.217 | "\(.module_name) \(.executed)"' 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2236104 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2236104 ']' 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2236104 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2236104 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2236104' 00:37:57.217 killing process with pid 2236104 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2236104 00:37:57.217 Received shutdown signal, test time was about 2.000000 seconds 00:37:57.217 00:37:57.217 Latency(us) 00:37:57.217 [2024-12-09T09:48:41.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.217 [2024-12-09T09:48:41.871Z] =================================================================================================================== 00:37:57.217 [2024-12-09T09:48:41.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:57.217 10:48:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2236104 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2234467 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2234467 ']' 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2234467 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234467 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234467' 00:37:57.787 killing process with pid 2234467 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2234467 00:37:57.787 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2234467 00:37:58.046 00:37:58.046 real 0m20.423s 00:37:58.046 user 0m42.694s 00:37:58.046 sys 0m5.599s 00:37:58.046 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.046 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:58.046 ************************************ 00:37:58.046 END TEST nvmf_digest_clean 00:37:58.046 ************************************ 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:58.308 ************************************ 00:37:58.308 START TEST nvmf_digest_error 00:37:58.308 ************************************ 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2236782 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2236782 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2236782 ']' 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.308 10:48:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:58.308 [2024-12-09 10:48:42.864973] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:37:58.308 [2024-12-09 10:48:42.865096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.569 [2024-12-09 10:48:43.027774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.569 [2024-12-09 10:48:43.140197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.569 [2024-12-09 10:48:43.140338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.569 [2024-12-09 10:48:43.140375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.569 [2024-12-09 10:48:43.140407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.569 [2024-12-09 10:48:43.140435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
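The app_setup_trace notices above appear because the target was launched with -e 0xFFFF, which enables every tracepoint group. If a failure in nvmf_digest_error ever needs deeper inspection, a snapshot can be captured exactly as the notice suggests; a small sketch (app name nvmf and instance id 0 follow from the -i 0 launch above; the output redirection is an assumption):

  build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # live snapshot, per the notice
  cp /dev/shm/nvmf_trace.0 /tmp/                       # keep the ring for offline analysis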
00:37:58.569 [2024-12-09 10:48:43.141825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:58.829 [2024-12-09 10:48:43.407457] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.829 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:59.089 null0 00:37:59.089 [2024-12-09 10:48:43.601957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.089 [2024-12-09 10:48:43.626345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2236931 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2236931 /var/tmp/bperf.sock 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2236931 ']' 
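Between the accel_assign_opc notice and the TCP listener notice above, common_target_config pushes the target-side setup through an rpc_cmd block the trace elides. A sketch of what that configuration plausibly looks like; the null-bdev geometry and the exact subsystem RPCs are assumptions, since only their resulting notices (null0, "*** TCP Transport Init ***", the 10.0.0.2:4420 listener) show up in the log:

  rpc=./scripts/rpc.py    # target socket, /var/tmp/spdk.sock by default

  $rpc accel_assign_opc -o crc32c -m error    # per the notice: crc32c -> error module
  $rpc framework_start_init
  $rpc bdev_null_create null0 100 4096        # assumed size (MiB) and block size
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420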
00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:59.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.089 10:48:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:59.089 [2024-12-09 10:48:43.733186] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:37:59.089 [2024-12-09 10:48:43.733350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236931 ] 00:37:59.348 [2024-12-09 10:48:43.894764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.607 [2024-12-09 10:48:44.009475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:00.991 10:48:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:01.564 nvme0n1 00:38:01.825 10:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:01.825 10:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.825 10:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
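The host-side half of the error test was just traced above: bperf is told to keep per-status-code NVMe error statistics and to retry failed I/O indefinitely, injection is disabled so the controller can attach with clean digests, and crc32c corruption is switched on (-t corrupt -i 256) right before perform_tests kicks off below. Condensed from the digest.sh lines in the trace (the -i 256 argument is copied verbatim; its precise cadence semantics belong to the accel error module):

  tgt_rpc=./scripts/rpc.py                            # nvmf target, default socket
  bperf_rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"

  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $tgt_rpc accel_error_inject_error -o crc32c -t disable      # attach cleanly first
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests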
00:38:01.825 10:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.825 10:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:01.825 10:48:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:02.085 Running I/O for 2 seconds... 00:38:02.085 [2024-12-09 10:48:46.541942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.085 [2024-12-09 10:48:46.542051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.086 [2024-12-09 10:48:46.542105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.086 [2024-12-09 10:48:46.577208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.086 [2024-12-09 10:48:46.577291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.086 [2024-12-09 10:48:46.577337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.086 [2024-12-09 10:48:46.615564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.086 [2024-12-09 10:48:46.615645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.086 [2024-12-09 10:48:46.615691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.086 [2024-12-09 10:48:46.643526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.086 [2024-12-09 10:48:46.643605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.086 [2024-12-09 10:48:46.643649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.086 [2024-12-09 10:48:46.684492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.086 [2024-12-09 10:48:46.684572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.086 [2024-12-09 10:48:46.684618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.086 [2024-12-09 10:48:46.721061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.086 [2024-12-09 10:48:46.721140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.086 [2024-12-09 10:48:46.721184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.758133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.758211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.758256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.788332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.788410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.788453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.819753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.819831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.819875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.850067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.850146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.850190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.881101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.881179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.881223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.913674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.913779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.913826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.946541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.946620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.946664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.347 [2024-12-09 10:48:46.973541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.347 [2024-12-09 10:48:46.973619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.347 [2024-12-09 10:48:46.973678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.608 [2024-12-09 10:48:47.005082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.608 [2024-12-09 10:48:47.005118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.005138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.609 [2024-12-09 10:48:47.037612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.609 [2024-12-09 10:48:47.037690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.037763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.609 [2024-12-09 10:48:47.068012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.609 [2024-12-09 10:48:47.068111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.068155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.609 [2024-12-09 10:48:47.106016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.609 [2024-12-09 10:48:47.106096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.106141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.609 [2024-12-09 10:48:47.138435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.609 [2024-12-09 10:48:47.138516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.138560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.609 [2024-12-09 10:48:47.172630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.609 [2024-12-09 10:48:47.172709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.172783] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.609 [2024-12-09 10:48:47.201829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.609 [2024-12-09 10:48:47.201906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.201950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.609 [2024-12-09 10:48:47.237120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.609 [2024-12-09 10:48:47.237198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.609 [2024-12-09 10:48:47.237242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 [2024-12-09 10:48:47.274777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.274814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.274834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 [2024-12-09 10:48:47.307300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.307389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.307433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 [2024-12-09 10:48:47.339394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.339473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.339518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 [2024-12-09 10:48:47.363657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.363751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.363801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 [2024-12-09 10:48:47.394552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.394633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.394677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 [2024-12-09 10:48:47.427370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.427448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.427491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 [2024-12-09 10:48:47.463072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.463152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.463196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:02.870 7611.00 IOPS, 29.73 MiB/s [2024-12-09T09:48:47.524Z] [2024-12-09 10:48:47.502030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:02.870 [2024-12-09 10:48:47.502107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.870 [2024-12-09 10:48:47.502151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.131 [2024-12-09 10:48:47.539585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.131 [2024-12-09 10:48:47.539664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.131 [2024-12-09 10:48:47.539737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.131 [2024-12-09 10:48:47.576374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.131 [2024-12-09 10:48:47.576454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.131 [2024-12-09 10:48:47.576498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.131 [2024-12-09 10:48:47.601263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.131 [2024-12-09 10:48:47.601341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.131 [2024-12-09 10:48:47.601385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.131 [2024-12-09 10:48:47.633661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.131 [2024-12-09 10:48:47.633756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.131 [2024-12-09 10:48:47.633802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.131 [2024-12-09 10:48:47.664177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.131 [2024-12-09 10:48:47.664257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.131 [2024-12-09 10:48:47.664302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.131 [2024-12-09 10:48:47.704111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.132 [2024-12-09 10:48:47.704191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.132 [2024-12-09 10:48:47.704235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.132 [2024-12-09 10:48:47.743227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.132 [2024-12-09 10:48:47.743306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.132 [2024-12-09 10:48:47.743351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.132 [2024-12-09 10:48:47.782656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.132 [2024-12-09 10:48:47.782752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.132 [2024-12-09 10:48:47.782802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.393 [2024-12-09 10:48:47.819863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.393 [2024-12-09 10:48:47.819942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.393 [2024-12-09 10:48:47.819986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.393 [2024-12-09 10:48:47.854864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.393 [2024-12-09 10:48:47.854964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.393 [2024-12-09 10:48:47.855010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.393 [2024-12-09 10:48:47.898228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 
00:38:03.393 [2024-12-09 10:48:47.898305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.393 [2024-12-09 10:48:47.898348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.393 [2024-12-09 10:48:47.932437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.393 [2024-12-09 10:48:47.932518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.393 [2024-12-09 10:48:47.932563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.393 [2024-12-09 10:48:47.969187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.393 [2024-12-09 10:48:47.969264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.393 [2024-12-09 10:48:47.969308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.393 [2024-12-09 10:48:47.995294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.393 [2024-12-09 10:48:47.995393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.393 [2024-12-09 10:48:47.995437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.393 [2024-12-09 10:48:48.031964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.393 [2024-12-09 10:48:48.032041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.393 [2024-12-09 10:48:48.032086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.064357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.654 [2024-12-09 10:48:48.064436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.654 [2024-12-09 10:48:48.064479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.095036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.654 [2024-12-09 10:48:48.095115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.654 [2024-12-09 10:48:48.095158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.129491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.654 [2024-12-09 10:48:48.129570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.654 [2024-12-09 10:48:48.129615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.153471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.654 [2024-12-09 10:48:48.153550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.654 [2024-12-09 10:48:48.153594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.178846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.654 [2024-12-09 10:48:48.178882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.654 [2024-12-09 10:48:48.178902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.209873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.654 [2024-12-09 10:48:48.209953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.654 [2024-12-09 10:48:48.210007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.244049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.654 [2024-12-09 10:48:48.244127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.654 [2024-12-09 10:48:48.244173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.654 [2024-12-09 10:48:48.283868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.655 [2024-12-09 10:48:48.283944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.655 [2024-12-09 10:48:48.283987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.916 [2024-12-09 10:48:48.319269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.916 [2024-12-09 10:48:48.319348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.916 [2024-12-09 10:48:48.319394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.916 [2024-12-09 10:48:48.350279] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.916 [2024-12-09 10:48:48.350356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.916 [2024-12-09 10:48:48.350402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.916 [2024-12-09 10:48:48.378801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.916 [2024-12-09 10:48:48.378875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.916 [2024-12-09 10:48:48.378919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.916 [2024-12-09 10:48:48.409919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.916 [2024-12-09 10:48:48.409994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.916 [2024-12-09 10:48:48.410054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.916 [2024-12-09 10:48:48.442892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.916 [2024-12-09 10:48:48.442972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.916 [2024-12-09 10:48:48.443016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.916 [2024-12-09 10:48:48.472575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.916 [2024-12-09 10:48:48.472651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.916 [2024-12-09 10:48:48.472694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.917 7627.00 IOPS, 29.79 MiB/s [2024-12-09T09:48:48.571Z] [2024-12-09 10:48:48.512177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1748e60) 00:38:03.917 [2024-12-09 10:48:48.512252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:03.917 [2024-12-09 10:48:48.512295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:03.917 00:38:03.917 Latency(us) 00:38:03.917 [2024-12-09T09:48:48.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.917 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:03.917 nvme0n1 : 2.02 7616.75 29.75 0.00 0.00 16768.06 5655.51 60196.03 00:38:03.917 [2024-12-09T09:48:48.571Z] 
=================================================================================================================== 00:38:03.917 [2024-12-09T09:48:48.571Z] Total : 7616.75 29.75 0.00 0.00 16768.06 5655.51 60196.03 00:38:03.917 { 00:38:03.917 "results": [ 00:38:03.917 { 00:38:03.917 "job": "nvme0n1", 00:38:03.917 "core_mask": "0x2", 00:38:03.917 "workload": "randread", 00:38:03.917 "status": "finished", 00:38:03.917 "queue_depth": 128, 00:38:03.917 "io_size": 4096, 00:38:03.917 "runtime": 2.019496, 00:38:03.917 "iops": 7616.75190245487, 00:38:03.917 "mibps": 29.752937118964336, 00:38:03.917 "io_failed": 0, 00:38:03.917 "io_timeout": 0, 00:38:03.917 "avg_latency_us": 16768.06146751614, 00:38:03.917 "min_latency_us": 5655.514074074074, 00:38:03.917 "max_latency_us": 60196.02962962963 00:38:03.917 } 00:38:03.917 ], 00:38:03.917 "core_count": 1 00:38:03.917 } 00:38:03.917 10:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:03.917 10:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:03.917 | .driver_specific 00:38:03.917 | .nvme_error 00:38:03.917 | .status_code 00:38:03.917 | .command_transient_transport_error' 00:38:03.917 10:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:03.917 10:48:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 60 > 0 )) 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2236931 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2236931 ']' 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2236931 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2236931 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2236931' 00:38:04.863 killing process with pid 2236931 00:38:04.863 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2236931 00:38:04.863 Received shutdown signal, test time was about 2.000000 seconds 00:38:04.863 00:38:04.863 Latency(us) 00:38:04.863 [2024-12-09T09:48:49.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.863 [2024-12-09T09:48:49.517Z] =================================================================================================================== 00:38:04.863 [2024-12-09T09:48:49.517Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:04.863 10:48:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2236931 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2237598 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2237598 /var/tmp/bperf.sock 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2237598 ']' 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:05.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:05.125 10:48:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:05.125 [2024-12-09 10:48:49.614378] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:38:05.125 [2024-12-09 10:48:49.614482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237598 ] 00:38:05.125 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:05.125 Zero copy mechanism will not be used. 
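The pass/fail signal for the 4 KiB/qd-128 error pass above came from get_transient_errcount: with --nvme-error-stat enabled, every injected digest mismatch is retried by the bdev layer and tallied under the transient transport error status code, and digest.sh@71 only requires the counter to be positive (60 were recorded above). A sketch of that readout:

  errs=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  (( errs > 0 ))    # 60 above: the injected corruptions surfaced and were retried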
00:38:05.125 [2024-12-09 10:48:49.752294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:05.386 [2024-12-09 10:48:49.873944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:05.647 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:05.647 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:38:05.647 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:05.647 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:06.216 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:06.216 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:06.216 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:06.216 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:06.216 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:06.216 10:48:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:06.782 nvme0n1
00:38:06.782 10:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:06.782 10:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:06.782 10:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:06.782 10:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:06.782 10:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:06.782 10:48:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:07.042 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:07.042 Zero copy mechanism will not be used.
00:38:07.042 Running I/O for 2 seconds...
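Pulled together, the setup traced above is five calls: enable per-status-code NVMe error counters with unlimited bdev retries, disable crc32c error injection so the attach itself is clean, attach the controller over TCP with data digest enabled, re-arm injection to corrupt every 32nd crc32c, and start the run. Regrouped as a plain script for readability (commands are verbatim from the trace; only the $RPC shorthand is added):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Per-status-code NVMe error counters, and retry failed I/O indefinitely.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Injection off while the controller attaches.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest (--ddgst): every data PDU now carries a crc32c.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 32nd crc32c the accel layer computes, then kick off the I/O.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests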
00:38:07.042 [2024-12-09 10:48:51.509087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.509193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.509246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.521110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.521190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.521235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.533098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.533178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.533223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.544946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.545023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.545067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.556923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.556999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.557043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.568921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.568996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.569040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.581426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.581503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.581545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.593617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.593692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.593754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.607480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.607558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.607601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.621256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.621340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.621386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.634989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.635071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.635117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.649314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.649395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.649441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.663846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.663929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.663998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.678001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.678039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.678059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.042 [2024-12-09 10:48:51.689747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.042 [2024-12-09 10:48:51.689811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.042 [2024-12-09 10:48:51.689830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.701947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.701984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.702026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.714235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.714313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.714359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.724672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.724777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.724798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.735598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.735674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.735717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.748003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.748040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.748097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.759066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.759101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.759120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.765347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.765439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.765497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.774849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.774892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.774915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.787227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.787309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.787355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.301 [2024-12-09 10:48:51.799588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.301 [2024-12-09 10:48:51.799667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.301 [2024-12-09 10:48:51.799710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.812412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.812490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.812535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.824937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.824973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.824993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.836886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.836922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 
[2024-12-09 10:48:51.836942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.848691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.848794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.848815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.860111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.860186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.860228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.872146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.872221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.872265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.883809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.883885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.883929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.896629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.896709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.896777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.908856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.908935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.908981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.921142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.921217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.921261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.934085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.934165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.934211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.302 [2024-12-09 10:48:51.946500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.302 [2024-12-09 10:48:51.946578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.302 [2024-12-09 10:48:51.946622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:51.958474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:51.958554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:51.958597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:51.970798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:51.970833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:51.970864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:51.982465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:51.982543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:51.982587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:51.994103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:51.994180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:51.994225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.002888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.002924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.002943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.013597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.013675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.013719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.024540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.024615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.024658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.036453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.036535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.036582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.049585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.049665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.049711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.062687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.062779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.062826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.075892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.075985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.076030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.090082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.090161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.090206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.103263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.103341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.103384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.117204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.117281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.117327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.131029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.131106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.131149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.144384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.144461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.144505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.158711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.158807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.158852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.172107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 [2024-12-09 10:48:52.172186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.172229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.185572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.562 
[2024-12-09 10:48:52.185650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.562 [2024-12-09 10:48:52.185694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.562 [2024-12-09 10:48:52.198334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.563 [2024-12-09 10:48:52.198412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.563 [2024-12-09 10:48:52.198456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.563 [2024-12-09 10:48:52.211769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.563 [2024-12-09 10:48:52.211847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.563 [2024-12-09 10:48:52.211891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.225452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.225531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.225576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.238070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.238150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.238196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.249961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.250036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.250079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.262030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.262105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.262148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.273940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.274015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.274057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.286025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.286099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.286142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.298939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.299017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.299078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.312420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.312499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.312544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.325502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.325580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.325624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.338928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.339004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.339048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.352235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.352314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.352359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.366047] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.366125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.366169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.379482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.379560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.379605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.391709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.391800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.391844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.403848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.403935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.403979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.415793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.415868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.823 [2024-12-09 10:48:52.415912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.823 [2024-12-09 10:48:52.427852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.823 [2024-12-09 10:48:52.427926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.824 [2024-12-09 10:48:52.427969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.824 [2024-12-09 10:48:52.439836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.824 [2024-12-09 10:48:52.439912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.824 [2024-12-09 10:48:52.439957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
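Every injected corruption in this stretch of the log produces the same three-entry pattern: nvme_tcp.c flags the data digest mismatch on the qpair (0xabd620), nvme_qpair.c reprints the affected READ, and its completion is logged as COMMAND TRANSIENT TRANSPORT ERROR (the 00/22 pair being status code type / status code). Because --bdev-retry-count -1 was set, each one is retried rather than failed, which is why io_failed stays 0 in the results while the per-status counter climbs; that counter is what get_transient_errcount read out earlier over bdev_get_iostat. The same query as a one-liner, with the log's multi-line jq pipeline collapsed into a single path:

    # Accumulated transient-transport-error count for nvme0n1 (cf. host/digest.sh@27-28 above).
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'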
00:38:07.824 [2024-12-09 10:48:52.451921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.824 [2024-12-09 10:48:52.452005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.824 [2024-12-09 10:48:52.452049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.824 [2024-12-09 10:48:52.463875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.824 [2024-12-09 10:48:52.463950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.824 [2024-12-09 10:48:52.463992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.824 [2024-12-09 10:48:52.475867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:07.824 [2024-12-09 10:48:52.475942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.824 [2024-12-09 10:48:52.475985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.488492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.488568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.488612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.500526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.500602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.500647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:08.084 2484.00 IOPS, 310.50 MiB/s [2024-12-09T09:48:52.738Z] [2024-12-09 10:48:52.516691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.516792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.516854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.528997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.529075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.529119] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.540880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.540955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.540998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.552627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.552715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.552790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.564816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.564891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.564935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.576947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.577024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.577067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.589072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.589149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.589194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.601341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.601417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.601461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.614712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.614805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.614849] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.626962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.627059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.627105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.639021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.639097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.639140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.651050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.084 [2024-12-09 10:48:52.651125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.084 [2024-12-09 10:48:52.651168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:08.084 [2024-12-09 10:48:52.662923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.085 [2024-12-09 10:48:52.662997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.085 [2024-12-09 10:48:52.663040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:08.085 [2024-12-09 10:48:52.675043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.085 [2024-12-09 10:48:52.675140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.085 [2024-12-09 10:48:52.675186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:08.085 [2024-12-09 10:48:52.687118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.085 [2024-12-09 10:48:52.687194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.085 [2024-12-09 10:48:52.687237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:08.085 [2024-12-09 10:48:52.699841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.085 [2024-12-09 10:48:52.699921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
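A quick consistency check on the interim counter printed a few entries back (2484.00 IOPS, 310.50 MiB/s): this run uses 131072-byte I/Os, and the earlier 4096-byte run reported 29.75 MiB/s at 7616.75 IOPS, so both throughput figures follow directly from MiB/s = IOPS * io_size / 2^20:

    echo '2484.00 * 131072 / 1048576' | bc -l            # 310.50 MiB/s for this 128 KiB run
    echo '7616.75190245487 * 4096 / 1048576' | bc -l     # 29.7529... MiB/s for the earlier 4 KiB run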
00:38:08.085 [2024-12-09 10:48:52.699966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:08.085 [2024-12-09 10:48:52.712745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.085 [2024-12-09 10:48:52.712823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.085 [2024-12-09 10:48:52.712870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:08.085 [2024-12-09 10:48:52.721216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.085 [2024-12-09 10:48:52.721292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.085 [2024-12-09 10:48:52.721334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:08.085 [2024-12-09 10:48:52.731092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.085 [2024-12-09 10:48:52.731170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.085 [2024-12-09 10:48:52.731213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:08.345 [2024-12-09 10:48:52.743009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.345 [2024-12-09 10:48:52.743085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.345 [2024-12-09 10:48:52.743129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:08.345 [2024-12-09 10:48:52.751702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.345 [2024-12-09 10:48:52.751791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.345 [2024-12-09 10:48:52.751835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:08.345 [2024-12-09 10:48:52.761606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.345 [2024-12-09 10:48:52.761679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:08.345 [2024-12-09 10:48:52.761739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:08.345 [2024-12-09 10:48:52.773369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620) 00:38:08.345 [2024-12-09 10:48:52.773445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:08.345 [2024-12-09 10:48:52.773489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:38:08.345 [2024-12-09 10:48:52.781692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620)
00:38:08.345 [2024-12-09 10:48:52.781780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:08.345 [2024-12-09 10:48:52.781825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1365 data digest error on tqpair=(0xabd620), the affected READ command, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remainder of the 2-second randread run, with only cid, lba, and sqhd varying ...]
00:38:08.869 2581.00 IOPS, 322.62 MiB/s [2024-12-09T09:48:53.523Z]
00:38:08.869 [2024-12-09 10:48:53.514982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xabd620)
00:38:08.869 [2024-12-09 10:48:53.515058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:08.869 [2024-12-09 10:48:53.515101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:38:08.869
00:38:08.869 Latency(us)
00:38:08.869 [2024-12-09T09:48:53.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:08.869 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:38:08.869 nvme0n1 : 2.01 2577.63 322.20 0.00 0.00 6194.05 1868.99 16311.18
00:38:08.869 [2024-12-09T09:48:53.523Z] ===================================================================================================================
00:38:08.869 [2024-12-09T09:48:53.523Z] Total : 2577.63 322.20 0.00 0.00 6194.05 1868.99 16311.18
00:38:09.130 {
00:38:09.130   "results": [
00:38:09.130     {
00:38:09.130       "job": "nvme0n1",
00:38:09.130       "core_mask": "0x2",
00:38:09.130       "workload": "randread",
00:38:09.130       "status": "finished",
00:38:09.130       "queue_depth": 16,
00:38:09.130       "io_size": 131072,
00:38:09.130       "runtime": 2.008821,
00:38:09.130       "iops": 2577.631356900391,
00:38:09.130       "mibps": 322.2039196125489,
00:38:09.130       "io_failed": 0,
00:38:09.130       "io_timeout": 0,
00:38:09.130       "avg_latency_us": 6194.046401441998,
00:38:09.130       "min_latency_us": 1868.9896296296297,
00:38:09.130       "max_latency_us": 16311.182222222222
00:38:09.130     }
00:38:09.130   ],
00:38:09.130   "core_count": 1
00:38:09.130 }
00:38:09.130 10:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:09.130 10:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:09.130 10:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:09.130 10:48:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:09.130 | .driver_specific
00:38:09.130 | .nvme_error
00:38:09.130 | .status_code
00:38:09.130 | .command_transient_transport_error'
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 ))
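The 168 consumed by the (( 168 > 0 )) check above is the transient-transport-error counter that the jq filter pulls out of the bdev_get_iostat JSON. A minimal stand-alone sketch of the same query, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock and the controller was attached with --nvme-error-stat enabled (both true in this run; the errcount variable name is illustrative):

#!/usr/bin/env bash
# Sketch of get_transient_errcount as exercised above: ask the running
# bdevperf app for per-bdev I/O stats over its RPC socket, then pull the
# transient transport error counter out of the JSON reply. The nvme_error
# counters are only populated because bdev_nvme_set_options was called
# with --nvme-error-stat earlier in this run.
rpc_sock=/var/tmp/bperf.sock
bdev=nvme0n1

errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
    jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')

# The digest-error test passes only if at least one transient error was seen.
(( errcount > 0 )) && echo "OK: $errcount transient transport errors" || exit 1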
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2237598
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2237598 ']'
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2237598
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2237598
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2237598'
00:38:09.700 killing process with pid 2237598
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2237598
00:38:09.700 Received shutdown signal, test time was about 2.000000 seconds
00:38:09.700
00:38:09.700 Latency(us)
00:38:09.700 [2024-12-09T09:48:54.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:09.700 [2024-12-09T09:48:54.354Z] ===================================================================================================================
00:38:09.700 [2024-12-09T09:48:54.354Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:09.700 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2237598
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2238136
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2238136 /var/tmp/bperf.sock
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2238136 ']'
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:09.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
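The run_bperf_err trace above boils down to launching bdevperf idle (-z) on a private RPC socket and then polling until that socket accepts RPCs. A rough sketch of just that startup handshake, using the same arguments as this randwrite 4096 128 invocation; the polling loop is an assumption about how the real waitforlisten helper behaves (retrying a harmless RPC such as rpc_get_methods), with the loop bound of 100 mirroring max_retries above:

#!/usr/bin/env bash
# Sketch of the bdevperf startup traced above: -z starts the app with no
# job, so it sits idle until perform_tests is sent over -r's RPC socket.
# -w/-o/-t/-q mirror run_bperf_err randwrite 4096 128 from this run.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

"$spdk/build/examples/bdevperf" -m 2 -r "$sock" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# waitforlisten, reduced to its core: retry an innocuous RPC until the
# socket is up (the real helper allows up to 100 retries).
for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done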
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:09.960 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:09.960 [2024-12-09 10:48:54.461238] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:38:09.960 [2024-12-09 10:48:54.461340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238136 ]
00:38:09.960 [2024-12-09 10:48:54.537855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:09.960 [2024-12-09 10:48:54.596244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:10.219 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:10.219 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:38:10.219 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:10.219 10:48:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:10.787 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:10.787 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:10.787 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:10.787 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:10.787 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:10.787 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:11.045 nvme0n1
00:38:11.045 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:38:11.045 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.045 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:11.045 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.045 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:11.045 10:48:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:11.306 Running I/O for 2 seconds...
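Once the app is listening, the trace above configures the digest-error scenario over RPC: NVMe error statistics and bdev retry count -1 are switched on, any stale crc32c injection is disabled, the controller is attached with data digest (--ddgst) enabled, and crc32c corruption is re-armed with -i 256 (argument taken verbatim from the trace) before perform_tests starts the job. A condensed sketch of that sequence; the $rpc shorthand is illustrative, and the interpretive comments are assumptions read off the flag names:

#!/usr/bin/env bash
# Sketch of the digest-error setup traced above, against the idle
# bdevperf instance on /var/tmp/bperf.sock.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Record NVMe error completions per controller and keep retrying failed
# I/O (-1 retry count), so injected digest failures are counted rather
# than surfacing as I/O errors.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean slate, then attach with data digest (--ddgst) on.
$rpc accel_error_inject_error -o crc32c -t disable
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm crc32c corruption with the same -i 256 argument as the trace;
# with --ddgst on, this manifests as the data digest errors logged below.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the configured randwrite job.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests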
00:38:11.306 [2024-12-09 10:48:55.790677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016eee190
00:38:11.306 [2024-12-09 10:48:55.791821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:11.306 [2024-12-09 10:48:55.791879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:38:11.306 [2024-12-09 10:48:55.816232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016eee5c8
00:38:11.306 [2024-12-09 10:48:55.819002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:11.306 [2024-12-09 10:48:55.819080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2241 Data digest error on tqpair=(0xa71b20) for a varying pdu, the affected WRITE command, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats throughout the 2-second randwrite run, with only cid, lba, pdu, and sqhd varying ...]
00:38:12.145 9235.00 IOPS, 36.07 MiB/s [2024-12-09T09:48:56.799Z]
[... the digest-error pattern continues unchanged after the throughput sample ...]
00:38:12.669 [2024-12-09 10:48:57.311981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef1ca0
00:38:12.669 [2024-12-09 10:48:57.314964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:38:12.669 [2024-12-09 10:48:57.315047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.339473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016edf550 00:38:12.929 [2024-12-09 10:48:57.343011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.343086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.366771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef5be8 00:38:12.929 [2024-12-09 10:48:57.369872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.369948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.395788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef2948 00:38:12.929 [2024-12-09 10:48:57.398990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.399064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.425628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef1868 00:38:12.929 [2024-12-09 10:48:57.428811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.428884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.454400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef57b0 00:38:12.929 [2024-12-09 10:48:57.456395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.456470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.489495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ede038 00:38:12.929 [2024-12-09 10:48:57.494298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.494373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.510292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef3e60 00:38:12.929 [2024-12-09 10:48:57.512342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7878 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.512420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.539459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef57b0 00:38:12.929 [2024-12-09 10:48:57.541648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.541738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:12.929 [2024-12-09 10:48:57.570348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef4b08 00:38:12.929 [2024-12-09 10:48:57.573293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:12.929 [2024-12-09 10:48:57.573366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:13.190 [2024-12-09 10:48:57.589195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef4f40 00:38:13.190 [2024-12-09 10:48:57.591853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.190 [2024-12-09 10:48:57.591886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:13.190 [2024-12-09 10:48:57.609403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016eeee38 00:38:13.190 [2024-12-09 10:48:57.611958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.190 [2024-12-09 10:48:57.612001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:13.190 [2024-12-09 10:48:57.631330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ee7818 00:38:13.190 [2024-12-09 10:48:57.634591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.190 [2024-12-09 10:48:57.634672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:13.190 [2024-12-09 10:48:57.653269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ef3e60 00:38:13.190 [2024-12-09 10:48:57.656744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:13.190 [2024-12-09 10:48:57.656807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:13.190 [2024-12-09 10:48:57.683436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016efd640 00:38:13.190 [2024-12-09 10:48:57.685707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:16886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:13.190 [2024-12-09 10:48:57.685804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:38:13.190 [2024-12-09 10:48:57.711222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ee01f8
00:38:13.190 [2024-12-09 10:48:57.713271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:13.190 [2024-12-09 10:48:57.713342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:38:13.190 [2024-12-09 10:48:57.746254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016eeff18
00:38:13.190 [2024-12-09 10:48:57.751087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:13.190 [2024-12-09 10:48:57.751167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:38:13.190 [2024-12-09 10:48:57.767074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71b20) with pdu=0x200016ee0630
00:38:13.190 [2024-12-09 10:48:57.769173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:13.190 [2024-12-09 10:48:57.769242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:38:13.190 9311.50 IOPS, 36.37 MiB/s
00:38:13.190 Latency(us)
00:38:13.190 [2024-12-09T09:48:57.844Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:38:13.190 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:13.190 nvme0n1                     :       2.02    9325.62      36.43       0.00       0.00   13698.31    3568.07   37671.06
00:38:13.190 [2024-12-09T09:48:57.844Z] ===================================================================================================================
00:38:13.190 [2024-12-09T09:48:57.844Z] Total                       :    9325.62      36.43       0.00       0.00   13698.31    3568.07   37671.06
00:38:13.190 {
00:38:13.190   "results": [
00:38:13.190     {
00:38:13.190       "job": "nvme0n1",
00:38:13.190       "core_mask": "0x2",
00:38:13.190       "workload": "randwrite",
00:38:13.190       "status": "finished",
00:38:13.190       "queue_depth": 128,
00:38:13.190       "io_size": 4096,
00:38:13.190       "runtime": 2.022601,
00:38:13.190       "iops": 9325.615877773223,
00:38:13.190       "mibps": 36.428187022551654,
00:38:13.190       "io_failed": 0,
00:38:13.190       "io_timeout": 0,
00:38:13.190       "avg_latency_us": 13698.31329649658,
00:38:13.190       "min_latency_us": 3568.071111111111,
00:38:13.190       "max_latency_us": 37671.0637037037
00:38:13.190     }
00:38:13.190   ],
00:38:13.190   "core_count": 1
00:38:13.190 }
00:38:13.190 10:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:13.190 10:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:13.190 10:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:13.190 | .driver_specific
00:38:13.190 | .nvme_error
00:38:13.190 | .status_code
00:38:13.190 | .command_transient_transport_error'
00:38:13.190 10:48:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
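The shell trace above is the whole verification step for the qd=128 run: because the controller was created after bdev_nvme_set_options --nvme-error-stat, the bdev layer keeps a per-status-code counter of NVMe completions, and the test just reads it back over the bperf RPC socket. A minimal sketch of what get_transient_errcount amounts to, assembled only from the xtrace shown here (the real helper lives in test/nvmf/host/digest.sh, so treat exact names and quoting as assumptions):

    # Sketch reassembled from the xtrace above, not the verbatim helper.
    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes driver_specific.nvme_error counters once the
        # controller is attached with --nvme-error-stat in effect.
        bperf_rpc bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The (( 73 > 0 )) arithmetic check that follows is the actual assertion: 73 WRITE completions on nvme0n1 carried TRANSIENT TRANSPORT ERROR (00/22), i.e. the injected crc32c corruption really did surface as data digest failures. A rough cross-check against a captured console log such as this one (bperf.log is a hypothetical file name) would be:

    # Counts occurrences even where several records share a physical line.
    grep -o 'Data digest error' bperf.log | wc -l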
00:38:13.760 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 73 > 0 ))
00:38:13.760 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2238136
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2238136 ']'
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2238136
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2238136
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2238136'
00:38:13.761 killing process with pid 2238136
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2238136
00:38:13.761 Received shutdown signal, test time was about 2.000000 seconds
00:38:13.761
00:38:13.761 Latency(us)
00:38:13.761 [2024-12-09T09:48:58.415Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:38:13.761 [2024-12-09T09:48:58.415Z] ===================================================================================================================
00:38:13.761 [2024-12-09T09:48:58.415Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:38:13.761 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2238136
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2238547
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2238547 /var/tmp/bperf.sock
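Condensed from the trace, the second bdevperf instance is launched by the same run_bperf_err helper as the earlier qd=128 run, now with a 128 KiB I/O size and queue depth 16. A sketch of the helper's core as it can be reassembled from the digest.sh@54-@60 lines above; the launch and PID bookkeeping details are assumptions, and $rootdir stands in for the spdk checkout:

    # Sketch only; reassembled from the xtrace, not copied from digest.sh.
    run_bperf_err() {
        local rw=$1 bs=$2 qd=$3
        # -z starts bdevperf idle; it waits for perform_tests on the RPC socket.
        "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
        bperfpid=$!
        waitforlisten "$bperfpid" /var/tmp/bperf.sock
    }

At this point bdevperf owns no bdevs at all; the NVMe controller is attached through the socket afterwards, which is what lets each run choose its own digest settings.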
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2238547 ']'
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:14.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:14.331 10:48:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:14.331 [2024-12-09 10:48:58.732632] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:38:14.331 [2024-12-09 10:48:58.732736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238547 ]
00:38:14.331 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:14.331 Zero copy mechanism will not be used.
00:38:14.331 [2024-12-09 10:48:58.858344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:14.331 [2024-12-09 10:48:58.976633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:15.268 10:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:15.268 10:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:38:15.268 10:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:15.268 10:48:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:15.529 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:15.529 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:15.529 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:15.529 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:15.529 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:15.529 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:16.468 nvme0n1
00:38:16.468 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:16.468 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:16.468 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
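The four RPCs traced above arm the failure mode before any I/O is issued. bperf_rpc explicitly targets the bdevperf socket (/var/tmp/bperf.sock); rpc_cmd shows no -s flag in the trace, so it presumably lands on the default RPC socket of the nvmf target app, which is where the crc32c corruption must live for the target to miscompute data digests. Condensed, with those socket roles marked as inferences:

    # Condensed from the xtrace above; which app each rpc_cmd reaches is inferred.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # host side: count statuses, retry without limit
    rpc_cmd accel_error_inject_error -o crc32c -t disable                    # target side: clear any earlier injection
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # host side: data digest (DDGST) enabled
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32              # target side: corrupt crc32c results (-i 32 as traced)

With --bdev-retry-count -1 the corrupted WRITEs are retried instead of being failed up to bdevperf, so io_failed stays 0 in the results JSON while the command_transient_transport_error counter advances once per injected digest error.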
00:38:16.468 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:16.468 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:16.468 10:49:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:16.468 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:16.468 Zero copy mechanism will not be used.
00:38:16.468 Running I/O for 2 seconds...
00:38:16.468 [2024-12-09 10:49:01.064546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8
00:38:16.468 [2024-12-09 10:49:01.064824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:16.468 [2024-12-09 10:49:01.064868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:38:16.468 [2024-12-09 10:49:01.076342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8
00:38:16.468 [2024-12-09 10:49:01.076570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:16.469 [2024-12-09 10:49:01.076648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:38:16.469 [2024-12-09 10:49:01.087959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8
00:38:16.469 [2024-12-09 10:49:01.088181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:16.469 [2024-12-09 10:49:01.088253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:38:16.469 [2024-12-09 10:49:01.101544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8
00:38:16.469 [2024-12-09 10:49:01.101815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:16.469 [2024-12-09 10:49:01.101892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:38:16.469 [2024-12-09 10:49:01.114846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8
00:38:16.469 [2024-12-09 10:49:01.115105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:16.469 [2024-12-09 10:49:01.115186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:38:16.731 [2024-12-09 10:49:01.127881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8
00:38:16.731 [2024-12-09 10:49:01.128142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:16.731 [2024-12-09 10:49:01.128215] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.141144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.141407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.141482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.154346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.154568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.154643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.167616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.167846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.167922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.180933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.181254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.181327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.194190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.194384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.194457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.207238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.207436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.207508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.220806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.221123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 
10:49:01.221196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.234229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.234578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.234650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.247895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.248069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.248140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.261496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.261686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.261774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.275242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.275506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.275580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.288815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.289031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.289102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.302628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.302876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.302950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.316430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.316644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:16.731 [2024-12-09 10:49:01.316714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.328601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.328803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.328844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.342072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.342264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.342333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.355912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.356152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.356228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.369586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.369845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.369920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.731 [2024-12-09 10:49:01.383324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.731 [2024-12-09 10:49:01.383551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.731 [2024-12-09 10:49:01.383623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.396549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.396759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.396836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.410221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.410461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.410534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.423971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.424190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.424262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.437475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.437702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.437792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.450686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.450934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.451006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.463898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.464107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.464179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.477116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.477313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.477384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.490273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.490481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.490554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.503299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.503485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.503557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.516431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.516598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.516670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.530045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.530255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.994 [2024-12-09 10:49:01.530324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.994 [2024-12-09 10:49:01.543607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.994 [2024-12-09 10:49:01.543778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.543850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.995 [2024-12-09 10:49:01.557107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.995 [2024-12-09 10:49:01.557291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.557361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.995 [2024-12-09 10:49:01.570699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.995 [2024-12-09 10:49:01.570924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.570994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.995 [2024-12-09 10:49:01.582860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.995 [2024-12-09 10:49:01.583105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.583182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:16.995 [2024-12-09 10:49:01.596427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.995 [2024-12-09 10:49:01.596645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.596717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:16.995 [2024-12-09 10:49:01.610063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.995 [2024-12-09 10:49:01.610266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.610333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:16.995 [2024-12-09 10:49:01.623620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.995 [2024-12-09 10:49:01.623856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.623929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:16.995 [2024-12-09 10:49:01.637237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:16.995 [2024-12-09 10:49:01.637438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:16.995 [2024-12-09 10:49:01.637506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:17.255 [2024-12-09 10:49:01.650888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.255 [2024-12-09 10:49:01.651101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.255 [2024-12-09 10:49:01.651168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:17.255 [2024-12-09 10:49:01.664535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.255 [2024-12-09 10:49:01.664753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.255 [2024-12-09 10:49:01.664824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:17.255 [2024-12-09 10:49:01.678086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.255 [2024-12-09 10:49:01.678272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.255 [2024-12-09 10:49:01.678363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:17.255 [2024-12-09 10:49:01.691787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.255 [2024-12-09 10:49:01.691975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.255 [2024-12-09 10:49:01.692044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:17.255 [2024-12-09 10:49:01.705356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.255 [2024-12-09 10:49:01.705564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.255 [2024-12-09 10:49:01.705633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:17.255 [2024-12-09 10:49:01.719469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.255 [2024-12-09 10:49:01.719709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.256 [2024-12-09 10:49:01.719807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:17.256 [2024-12-09 10:49:01.733373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.256 [2024-12-09 10:49:01.733573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.256 [2024-12-09 10:49:01.733643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:17.256 [2024-12-09 10:49:01.747036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.256 [2024-12-09 10:49:01.747221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.256 [2024-12-09 10:49:01.747291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:17.256 [2024-12-09 10:49:01.760652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.256 [2024-12-09 10:49:01.760858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.256 [2024-12-09 10:49:01.760927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:17.256 [2024-12-09 10:49:01.774225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.256 [2024-12-09 10:49:01.774448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.256 [2024-12-09 10:49:01.774521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:17.256 [2024-12-09 10:49:01.788010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.256 [2024-12-09 10:49:01.788245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.256 [2024-12-09 10:49:01.788328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:17.256 [2024-12-09 10:49:01.801145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:17.256 [2024-12-09 10:49:01.801388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:17.256 [2024-12-09 10:49:01.801461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:17.256
[... repeated "tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60)" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs elided; only the timestamp, lba and sqhd fields vary ...]
2305.00 IOPS, 288.12 MiB/s [2024-12-09T09:49:02.172Z]
[... further identical digest-error/transient-transport-error pairs elided, continuing through 00:38:18.567 ...]
[2024-12-09 10:49:03.040837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:18.567 [2024-12-09 10:49:03.040995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:38:18.567 [2024-12-09 10:49:03.041049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:18.567 [2024-12-09 10:49:03.053498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:18.567 [2024-12-09 10:49:03.053834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.567 [2024-12-09 10:49:03.053907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:18.567 2369.50 IOPS, 296.19 MiB/s [2024-12-09T09:49:03.221Z] [2024-12-09 10:49:03.068465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa71e60) with pdu=0x200016eff3c8 00:38:18.567 [2024-12-09 10:49:03.068788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:18.567 [2024-12-09 10:49:03.068822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:18.567 00:38:18.567 Latency(us) 00:38:18.567 [2024-12-09T09:49:03.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.567 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:18.567 nvme0n1 : 2.01 2367.01 295.88 0.00 0.00 6736.28 4805.97 16214.09 00:38:18.567 [2024-12-09T09:49:03.221Z] =================================================================================================================== 00:38:18.567 [2024-12-09T09:49:03.221Z] Total : 2367.01 295.88 0.00 0.00 6736.28 4805.97 16214.09 00:38:18.567 { 00:38:18.567 "results": [ 00:38:18.567 { 00:38:18.567 "job": "nvme0n1", 00:38:18.567 "core_mask": "0x2", 00:38:18.567 "workload": "randwrite", 00:38:18.567 "status": "finished", 00:38:18.567 "queue_depth": 16, 00:38:18.567 "io_size": 131072, 00:38:18.567 "runtime": 2.00886, 00:38:18.567 "iops": 2367.014127415549, 00:38:18.567 "mibps": 295.8767659269436, 00:38:18.567 "io_failed": 0, 00:38:18.567 "io_timeout": 0, 00:38:18.567 "avg_latency_us": 6736.282525217121, 00:38:18.567 "min_latency_us": 4805.973333333333, 00:38:18.567 "max_latency_us": 16214.091851851852 00:38:18.567 } 00:38:18.567 ], 00:38:18.567 "core_count": 1 00:38:18.567 } 00:38:18.567 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:18.567 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:18.567 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:18.567 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:18.567 | .driver_specific 00:38:18.567 | .nvme_error 00:38:18.567 | .status_code 00:38:18.567 | .command_transient_transport_error' 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 )) 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2238547 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 2238547 ']' 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2238547 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2238547 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2238547' 00:38:19.137 killing process with pid 2238547 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2238547 00:38:19.137 Received shutdown signal, test time was about 2.000000 seconds 00:38:19.137 00:38:19.137 Latency(us) 00:38:19.137 [2024-12-09T09:49:03.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:19.137 [2024-12-09T09:49:03.791Z] =================================================================================================================== 00:38:19.137 [2024-12-09T09:49:03.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:19.137 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2238547 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2236782 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2236782 ']' 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2236782 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2236782 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2236782' 00:38:19.397 killing process with pid 2236782 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2236782 00:38:19.397 10:49:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2236782 00:38:19.657 00:38:19.657 real 0m21.436s 00:38:19.657 user 0m45.661s 00:38:19.657 sys 0m5.543s 00:38:19.657 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.657 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:19.657 
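For readers picking apart the pass/fail logic traced above: host/digest.sh derives its transient-error count from bdev_get_iostat over the bperf RPC socket. Below is a minimal standalone sketch of that check assembled from the commands visible in the trace; the socket path, bdev name and jq filter are verbatim from the log, while the errcount variable and the standalone framing are illustrative rather than the test's actual code.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf keeps per-bdev NVMe error counters; the data digest errors injected
    # above surface as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))    # this run counted 154, so the (( 154 > 0 )) gate above passed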
************************************ 00:38:19.657 END TEST nvmf_digest_error 00:38:19.658 ************************************ 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:19.658 rmmod nvme_tcp 00:38:19.658 rmmod nvme_fabrics 00:38:19.658 rmmod nvme_keyring 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2236782 ']' 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2236782 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2236782 ']' 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2236782 00:38:19.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2236782) - No such process 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2236782 is not found' 00:38:19.658 Process with pid 2236782 is not found 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:19.658 10:49:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:22.206 00:38:22.206 real 0m47.334s 00:38:22.206 user 1m29.493s 00:38:22.206 sys 0m13.506s 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # 
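The teardown traced here, finished by the cvl_0_1 address flush just below, follows the usual nvmftestfini shape. A condensed sketch, reconstructed from the xtrace rather than copied from nvmf/common.sh; killprocess and remove_spdk_ns are the harness helpers seen in the trace, and $nvmfpid stands in for the target pid (2236782 in this run):

    nvmfcleanup() {
        sync
        # unloading nvme-tcp also drags out nvme_fabrics and nvme_keyring
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics
    }
    nvmftestfini() {
        nvmfcleanup
        # the target may already have exited, as it had here ("No such process")
        killprocess "$nvmfpid" || echo "Process with pid $nvmfpid is not found"
        # restore firewall state, dropping only the SPDK_NVMF-tagged rules
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        remove_spdk_ns
        ip -4 addr flush cvl_0_1
    }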
xtrace_disable 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:22.206 ************************************ 00:38:22.206 END TEST nvmf_digest 00:38:22.206 ************************************ 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.206 ************************************ 00:38:22.206 START TEST nvmf_bdevperf 00:38:22.206 ************************************ 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:22.206 * Looking for test storage... 00:38:22.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.206 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:22.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.206 --rc genhtml_branch_coverage=1 00:38:22.206 --rc genhtml_function_coverage=1 00:38:22.206 --rc genhtml_legend=1 00:38:22.207 --rc geninfo_all_blocks=1 00:38:22.207 --rc geninfo_unexecuted_blocks=1 00:38:22.207 00:38:22.207 ' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:22.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.207 --rc genhtml_branch_coverage=1 00:38:22.207 --rc genhtml_function_coverage=1 00:38:22.207 --rc genhtml_legend=1 00:38:22.207 --rc geninfo_all_blocks=1 00:38:22.207 --rc geninfo_unexecuted_blocks=1 00:38:22.207 00:38:22.207 ' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:22.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.207 --rc genhtml_branch_coverage=1 00:38:22.207 --rc genhtml_function_coverage=1 00:38:22.207 --rc genhtml_legend=1 00:38:22.207 --rc geninfo_all_blocks=1 00:38:22.207 --rc geninfo_unexecuted_blocks=1 00:38:22.207 00:38:22.207 ' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:22.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.207 --rc genhtml_branch_coverage=1 00:38:22.207 --rc genhtml_function_coverage=1 00:38:22.207 --rc genhtml_legend=1 00:38:22.207 --rc geninfo_all_blocks=1 00:38:22.207 --rc geninfo_unexecuted_blocks=1 00:38:22.207 00:38:22.207 ' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:22.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:38:22.207 10:49:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:25.539 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:25.539 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
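The scan above has matched both E810 functions (device ID 0x159b) and is about to resolve each PCI address to its kernel netdev through the device's sysfs net/ node rather than relying on driver naming. A minimal standalone sketch of the same PCI-to-netdev mapping, assuming pciutils is installed (the vendor/device filter is standard lspci syntax; the 0000:84:00.x addresses are simply what this host reports):

  # list Intel E810 functions (vendor 0x8086, device 0x159b), domain-qualified
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      # a bound port appears as a directory under the device's net/ node
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
      done
  done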
00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.539 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:25.540 Found net devices under 0000:84:00.0: cvl_0_0 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:25.540 Found net devices under 0000:84:00.1: cvl_0_1 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:25.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:25.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:38:25.540 00:38:25.540 --- 10.0.0.2 ping statistics --- 00:38:25.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.540 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:25.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:25.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:38:25.540 00:38:25.540 --- 10.0.0.1 ping statistics --- 00:38:25.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.540 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2241301 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2241301 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2241301 ']' 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:25.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:25.540 10:49:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.540 [2024-12-09 10:49:09.930456] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
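Both pings succeeding confirms the loopback topology nvmftestinit built above: port cvl_0_0 was moved into a private network namespace to act as the target side, its peer cvl_0_1 stayed in the root namespace as the initiator, both were addressed on the same /24 so traffic crosses the physical link, and the target application is then launched inside that namespace. Condensed to the essential commands, with names and addresses as reported in this run:

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                            # root ns -> target ns, over the wire
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # cores 1-3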
00:38:25.540 [2024-12-09 10:49:09.930633] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.540 [2024-12-09 10:49:10.066438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:25.540 [2024-12-09 10:49:10.182702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:25.540 [2024-12-09 10:49:10.182838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:25.540 [2024-12-09 10:49:10.182875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:25.540 [2024-12-09 10:49:10.182915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:25.540 [2024-12-09 10:49:10.182930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:25.540 [2024-12-09 10:49:10.185781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.540 [2024-12-09 10:49:10.185836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:25.540 [2024-12-09 10:49:10.185840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.799 [2024-12-09 10:49:10.346515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.799 Malloc0 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
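The rpc_cmd helper in the trace wraps SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock Unix-domain socket, which is filesystem-based, so the RPC client does not need to enter the target's network namespace. Issued by hand, the three target-side calls so far would look roughly like this (a sketch of the equivalent invocations, not the harness's exact wrapper):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u sets in-capsule data size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                    # -a allow any host, -s serial number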
00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:25.799 [2024-12-09 10:49:10.412716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:25.799 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:25.800 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:25.800 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:25.800 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:25.800 { 00:38:25.800 "params": { 00:38:25.800 "name": "Nvme$subsystem", 00:38:25.800 "trtype": "$TEST_TRANSPORT", 00:38:25.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:25.800 "adrfam": "ipv4", 00:38:25.800 "trsvcid": "$NVMF_PORT", 00:38:25.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:25.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:25.800 "hdgst": ${hdgst:-false}, 00:38:25.800 "ddgst": ${ddgst:-false} 00:38:25.800 }, 00:38:25.800 "method": "bdev_nvme_attach_controller" 00:38:25.800 } 00:38:25.800 EOF 00:38:25.800 )") 00:38:25.800 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:25.800 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:38:25.800 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:25.800 10:49:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:25.800 "params": { 00:38:25.800 "name": "Nvme1", 00:38:25.800 "trtype": "tcp", 00:38:25.800 "traddr": "10.0.0.2", 00:38:25.800 "adrfam": "ipv4", 00:38:25.800 "trsvcid": "4420", 00:38:25.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:25.800 "hdgst": false, 00:38:25.800 "ddgst": false 00:38:25.800 }, 00:38:25.800 "method": "bdev_nvme_attach_controller" 00:38:25.800 }' 00:38:26.058 [2024-12-09 10:49:10.488084] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
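With nvmf_subsystem_add_ns exposing Malloc0 and nvmf_subsystem_add_listener opening 10.0.0.2:4420, the target half is complete; everything from here on is initiator-side. The resolved JSON printed above is handed to bdevperf on /dev/fd/62 and is essentially a serialized bdev_nvme_attach_controller call; issuing the same attach by hand against a running SPDK app would look roughly like this (flag spellings per rpc.py; treat the exact form as illustrative):

  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # hdgst/ddgst default to off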
00:38:26.058 [2024-12-09 10:49:10.488253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241335 ] 00:38:26.058 [2024-12-09 10:49:10.600983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.058 [2024-12-09 10:49:10.663521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.625 Running I/O for 1 seconds... 00:38:27.564 8651.00 IOPS, 33.79 MiB/s 00:38:27.564 Latency(us) 00:38:27.564 [2024-12-09T09:49:12.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.564 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:27.564 Verification LBA range: start 0x0 length 0x4000 00:38:27.564 Nvme1n1 : 1.05 8378.43 32.73 0.00 0.00 14633.73 3179.71 45244.11 00:38:27.564 [2024-12-09T09:49:12.218Z] =================================================================================================================== 00:38:27.564 [2024-12-09T09:49:12.218Z] Total : 8378.43 32.73 0.00 0.00 14633.73 3179.71 45244.11 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2241589 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:27.823 { 00:38:27.823 "params": { 00:38:27.823 "name": "Nvme$subsystem", 00:38:27.823 "trtype": "$TEST_TRANSPORT", 00:38:27.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:27.823 "adrfam": "ipv4", 00:38:27.823 "trsvcid": "$NVMF_PORT", 00:38:27.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:27.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:27.823 "hdgst": ${hdgst:-false}, 00:38:27.823 "ddgst": ${ddgst:-false} 00:38:27.823 }, 00:38:27.823 "method": "bdev_nvme_attach_controller" 00:38:27.823 } 00:38:27.823 EOF 00:38:27.823 )") 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:27.823 10:49:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:27.823 "params": { 00:38:27.823 "name": "Nvme1", 00:38:27.823 "trtype": "tcp", 00:38:27.823 "traddr": "10.0.0.2", 00:38:27.823 "adrfam": "ipv4", 00:38:27.823 "trsvcid": "4420", 00:38:27.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:27.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:27.823 "hdgst": false, 00:38:27.823 "ddgst": false 00:38:27.823 }, 00:38:27.823 "method": "bdev_nvme_attach_controller" 00:38:27.823 }' 00:38:27.823 [2024-12-09 10:49:12.346363] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:38:27.823 [2024-12-09 10:49:12.346540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241589 ] 00:38:27.823 [2024-12-09 10:49:12.456290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.083 [2024-12-09 10:49:12.514191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.083 Running I/O for 15 seconds... 00:38:30.401 8617.00 IOPS, 33.66 MiB/s [2024-12-09T09:49:15.325Z] 8750.00 IOPS, 34.18 MiB/s [2024-12-09T09:49:15.325Z] 10:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2241301 00:38:30.671 10:49:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:30.671 [2024-12-09 10:49:15.277579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 10:49:15.277683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.277790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 10:49:15.277811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.277829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 10:49:15.277845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.277865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 10:49:15.277881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.277898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 10:49:15.277913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.277930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 
10:49:15.277945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.277962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 10:49:15.277989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.278949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.278969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.671 [2024-12-09 10:49:15.278985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.671 [2024-12-09 10:49:15.279533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.671 [2024-12-09 10:49:15.279570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.279953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.279969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:30.672 [2024-12-09 10:49:15.280376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280943] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.280963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.280979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.281969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.281995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282537] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.672 [2024-12-09 10:49:15.282718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.672 [2024-12-09 10:49:15.282793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.282809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.282825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.282844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.282861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.282875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.282891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.282905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.282920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.282935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.282950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.282964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.282980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.283831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.673 [2024-12-09 10:49:15.283862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.673 [2024-12-09 10:49:15.283895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.673 [2024-12-09 10:49:15.283925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.673 [2024-12-09 10:49:15.283955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.283970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.673 [2024-12-09 10:49:15.283996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.673 [2024-12-09 10:49:15.284041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:30.673 [2024-12-09 10:49:15.284070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 
10:49:15.284310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.284972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.284988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.285017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.285111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.285183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.285258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.285340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.285417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:30.673 [2024-12-09 10:49:15.285501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.673 [2024-12-09 10:49:15.285535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:30.673 [2024-12-09 10:49:15.285572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86e550 is same with the state(6) to be set
00:38:30.673 [2024-12-09 10:49:15.285613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:38:30.673 [2024-12-09 10:49:15.285643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:30.673 [2024-12-09 10:49:15.285672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48880 len:8 PRP1 0x0 PRP2 0x0
00:38:30.673 [2024-12-09 10:49:15.285705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:30.673 [2024-12-09 10:49:15.285901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:38:30.673 [2024-12-09 10:49:15.285924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:30.673 [2024-12-09 10:49:15.285940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:38:30.673 [2024-12-09 10:49:15.285954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:30.673 [2024-12-09 10:49:15.285968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:38:30.673 [2024-12-09 10:49:15.285981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:30.673 [2024-12-09 10:49:15.285996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:38:30.673 [2024-12-09 10:49:15.286009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:30.673 [2024-12-09 10:49:15.286041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:30.674 [2024-12-09 10:49:15.290413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:30.674 [2024-12-09 10:49:15.290449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:30.674 [2024-12-09 10:49:15.291057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.674 [2024-12-09 10:49:15.291099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:30.674 [2024-12-09 10:49:15.291114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:30.674 [2024-12-09 10:49:15.291308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:30.674 [2024-12-09 10:49:15.291505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:30.674 [2024-12-09 10:49:15.291528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:30.674 [2024-12-09 10:49:15.291546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:30.674 [2024-12-09 10:49:15.291560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:30.674 [2024-12-09 10:49:15.307282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:30.674 [2024-12-09 10:49:15.308011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.674 [2024-12-09 10:49:15.308105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:30.674 [2024-12-09 10:49:15.308148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:30.674 [2024-12-09 10:49:15.308754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:30.674 [2024-12-09 10:49:15.309008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:30.674 [2024-12-09 10:49:15.309045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:30.674 [2024-12-09 10:49:15.309059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:30.674 [2024-12-09 10:49:15.309072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:30.936 [2024-12-09 10:49:15.324846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:30.936 [2024-12-09 10:49:15.325422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.936 [2024-12-09 10:49:15.325496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:30.936 [2024-12-09 10:49:15.325537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:30.936 [2024-12-09 10:49:15.325930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:30.936 [2024-12-09 10:49:15.326385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:30.936 [2024-12-09 10:49:15.326440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:30.936 [2024-12-09 10:49:15.326474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:30.936 [2024-12-09 10:49:15.326508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
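Worth decoding once, since the same pattern repeats above for every queued I/O: SPDK prints NVMe completion status as (SCT/SC), so `ABORTED - SQ DELETION (00/08)` is status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. That is the expected fate of commands still outstanding on qpair 1 when its submission queue is torn down for the controller reset. As a standalone sketch (not SPDK's own print helper; the sample dword value is hypothetical), the fields unpack from completion queue entry dword 3 like this:

```c
/* Sketch: unpack the status SPDK renders as "(00/08) ... p:0 m:0 dnr:0".
 * Per the NVMe base spec, CQE dword 3 carries CID in bits 15:0, the phase
 * tag in bit 16, SC in bits 24:17, SCT in bits 27:25, More in bit 30 and
 * Do Not Retry in bit 31. Illustration only, not SPDK code. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t dw3 = 0x00100000u;        /* hypothetical CQE DW3 value */
    unsigned cid = dw3 & 0xffffu;      /* command identifier */
    unsigned p   = (dw3 >> 16) & 0x1;  /* phase tag */
    unsigned sc  = (dw3 >> 17) & 0xff; /* 0x08 = aborted, SQ deletion */
    unsigned sct = (dw3 >> 25) & 0x7;  /* 0x0  = generic command status */
    unsigned m   = (dw3 >> 30) & 0x1;  /* more status available */
    unsigned dnr = (dw3 >> 31) & 0x1;  /* do not retry */

    /* Prints "(00/08) cid:0 p:0 m:0 dnr:0", matching the log lines above. */
    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);
    return 0;
}
```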
00:38:30.936 [2024-12-09 10:49:15.340934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.936 [2024-12-09 10:49:15.341652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.936 [2024-12-09 10:49:15.341750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.936 [2024-12-09 10:49:15.341800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.936 [2024-12-09 10:49:15.342017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.936 [2024-12-09 10:49:15.342567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.936 [2024-12-09 10:49:15.342620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.936 [2024-12-09 10:49:15.342655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.936 [2024-12-09 10:49:15.342702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:30.936 [2024-12-09 10:49:15.357437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.936 [2024-12-09 10:49:15.357819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.936 [2024-12-09 10:49:15.357849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.936 [2024-12-09 10:49:15.357866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.936 [2024-12-09 10:49:15.358147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.936 [2024-12-09 10:49:15.358701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.936 [2024-12-09 10:49:15.358774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.936 [2024-12-09 10:49:15.358809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.936 [2024-12-09 10:49:15.358824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
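Every reconnect cycle dies at the same point: `connect() failed, errno = 111` out of posix.c. On Linux, errno 111 is ECONNREFUSED, meaning nothing was listening on 10.0.0.2:4420 (the standard NVMe/TCP port) at that instant, consistent with the target side being restarted by the test. A self-contained sketch that provokes the same errno with plain sockets (127.0.0.1 and the assumption that no local listener is on port 4420 are mine; this is not SPDK's posix sock module):

```c
/* Sketch: reproduce errno 111 (ECONNREFUSED) with a plain TCP connect().
 * Assumes nothing is listening on 127.0.0.1:4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

    if (fd < 0)
        return 1;
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}
```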
00:38:30.936 [2024-12-09 10:49:15.373647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.936 [2024-12-09 10:49:15.374185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.936 [2024-12-09 10:49:15.374259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.936 [2024-12-09 10:49:15.374301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.936 [2024-12-09 10:49:15.374828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.936 [2024-12-09 10:49:15.375064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.936 [2024-12-09 10:49:15.375087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.936 [2024-12-09 10:49:15.375115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.936 [2024-12-09 10:49:15.375129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:30.936 [2024-12-09 10:49:15.390107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.936 [2024-12-09 10:49:15.390854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.936 [2024-12-09 10:49:15.390884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.936 [2024-12-09 10:49:15.390900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.936 [2024-12-09 10:49:15.391218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.936 [2024-12-09 10:49:15.391805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.936 [2024-12-09 10:49:15.391828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.936 [2024-12-09 10:49:15.391844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.936 [2024-12-09 10:49:15.391859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:30.936 [2024-12-09 10:49:15.406601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.936 [2024-12-09 10:49:15.407161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.936 [2024-12-09 10:49:15.407193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.936 [2024-12-09 10:49:15.407209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.407436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.407873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.407897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.407911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.407925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:30.937 [2024-12-09 10:49:15.422458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.423098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.423171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.423213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.423801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.424024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.424096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.424131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.424167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:30.937 [2024-12-09 10:49:15.439191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.439975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.440005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.440022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.440535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.440925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.440950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.440965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.440979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:30.937 [2024-12-09 10:49:15.455562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.455987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.456017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.456072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.456628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.456962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.456987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.457002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.457016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:30.937 [2024-12-09 10:49:15.472210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.472937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.472967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.472983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.473467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.473898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.473923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.473937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.473951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:30.937 [2024-12-09 10:49:15.491235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.491982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.492063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.492104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.492647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.493224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.493280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.493316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.493350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:30.937 [2024-12-09 10:49:15.510303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.511184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.511256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.511298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.511874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.512432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.512500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.512537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.512572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:30.937 [2024-12-09 10:49:15.529467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.530319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.530392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.530433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.531006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.531573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.531627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.531661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.531695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:30.937 [2024-12-09 10:49:15.545302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.546111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.546145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.546165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.546407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.546660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.546687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.546707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.546736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:30.937 [2024-12-09 10:49:15.564397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.565317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.565391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.937 [2024-12-09 10:49:15.565432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.937 [2024-12-09 10:49:15.565997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.937 [2024-12-09 10:49:15.566554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.937 [2024-12-09 10:49:15.566609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.937 [2024-12-09 10:49:15.566643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.937 [2024-12-09 10:49:15.566677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:30.937 [2024-12-09 10:49:15.583563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:30.937 [2024-12-09 10:49:15.584439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:30.937 [2024-12-09 10:49:15.584511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:30.938 [2024-12-09 10:49:15.584553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:30.938 [2024-12-09 10:49:15.585123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:30.938 [2024-12-09 10:49:15.585678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:30.938 [2024-12-09 10:49:15.585753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:30.938 [2024-12-09 10:49:15.585796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:30.938 [2024-12-09 10:49:15.585831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:31.200 [2024-12-09 10:49:15.602674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.200 [2024-12-09 10:49:15.603514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.200 [2024-12-09 10:49:15.603587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.200 [2024-12-09 10:49:15.603629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.200 [2024-12-09 10:49:15.604200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.200 [2024-12-09 10:49:15.604780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.200 [2024-12-09 10:49:15.604836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.200 [2024-12-09 10:49:15.604871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.200 [2024-12-09 10:49:15.604906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:31.200 [2024-12-09 10:49:15.621860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.200 [2024-12-09 10:49:15.622709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.200 [2024-12-09 10:49:15.622798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.200 [2024-12-09 10:49:15.622840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.200 [2024-12-09 10:49:15.623382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.200 [2024-12-09 10:49:15.623961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.200 [2024-12-09 10:49:15.624016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.200 [2024-12-09 10:49:15.624061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.200 [2024-12-09 10:49:15.624095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:31.200 [2024-12-09 10:49:15.640955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.200 [2024-12-09 10:49:15.641788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.200 [2024-12-09 10:49:15.641873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.200 [2024-12-09 10:49:15.641917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.200 [2024-12-09 10:49:15.642460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.200 [2024-12-09 10:49:15.643049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.200 [2024-12-09 10:49:15.643105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.200 [2024-12-09 10:49:15.643140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.200 [2024-12-09 10:49:15.643174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:31.200 [2024-12-09 10:49:15.660027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.200 [2024-12-09 10:49:15.660838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.200 [2024-12-09 10:49:15.660911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.200 [2024-12-09 10:49:15.660953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.200 [2024-12-09 10:49:15.661495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.200 [2024-12-09 10:49:15.662077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.200 [2024-12-09 10:49:15.662133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.201 [2024-12-09 10:49:15.662169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.201 [2024-12-09 10:49:15.662203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:31.201 [2024-12-09 10:49:15.679080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.201 [2024-12-09 10:49:15.679942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.201 [2024-12-09 10:49:15.680014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.201 [2024-12-09 10:49:15.680055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.201 [2024-12-09 10:49:15.680598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.201 [2024-12-09 10:49:15.681178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.201 [2024-12-09 10:49:15.681234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.201 [2024-12-09 10:49:15.681269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.201 [2024-12-09 10:49:15.681305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:31.201 [2024-12-09 10:49:15.698157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.201 [2024-12-09 10:49:15.698990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.201 [2024-12-09 10:49:15.699062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.201 [2024-12-09 10:49:15.699104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.201 [2024-12-09 10:49:15.699659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.201 [2024-12-09 10:49:15.700238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.201 [2024-12-09 10:49:15.700293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.201 [2024-12-09 10:49:15.700328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.201 [2024-12-09 10:49:15.700361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:31.201 [2024-12-09 10:49:15.717237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.201 [2024-12-09 10:49:15.718071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.201 [2024-12-09 10:49:15.718144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.201 [2024-12-09 10:49:15.718186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.201 [2024-12-09 10:49:15.718753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.201 [2024-12-09 10:49:15.719326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.201 [2024-12-09 10:49:15.719381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.201 [2024-12-09 10:49:15.719416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.201 [2024-12-09 10:49:15.719450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:31.201 7460.67 IOPS, 29.14 MiB/s [2024-12-09T09:49:15.855Z] [2024-12-09 10:49:15.740498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.201 [2024-12-09 10:49:15.741300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.201 [2024-12-09 10:49:15.741373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.201 [2024-12-09 10:49:15.741414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.201 [2024-12-09 10:49:15.741981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.201 [2024-12-09 10:49:15.742538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.201 [2024-12-09 10:49:15.742593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.201 [2024-12-09 10:49:15.742628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.201 [2024-12-09 10:49:15.742661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:31.201 [2024-12-09 10:49:15.759517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:31.201 [2024-12-09 10:49:15.760388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:31.201 [2024-12-09 10:49:15.760460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:31.201 [2024-12-09 10:49:15.760500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:31.201 [2024-12-09 10:49:15.761067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:31.201 [2024-12-09 10:49:15.761624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:31.201 [2024-12-09 10:49:15.761691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:31.201 [2024-12-09 10:49:15.761746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:31.201 [2024-12-09 10:49:15.761786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
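The `7460.67 IOPS, 29.14 MiB/s` fragment interleaved above is the benchmark's periodic throughput line, and it is internally consistent with the I/O size visible in the aborted commands: `len:8` is eight logical blocks, which at a 512-byte block size is 4 KiB per I/O. A quick check (the IOPS value is copied from the log; the 512-byte block size is an assumption that the reported MiB/s happens to confirm):

```c
/* Sketch: verify the perf line's MiB/s from its IOPS and the len:8 I/Os. */
#include <stdio.h>

int main(void)
{
    double iops = 7460.67;         /* from the periodic perf line above */
    double bytes_per_io = 8 * 512; /* len:8 blocks, assumed 512 B each */

    /* 7460.67 * 4096 / 2^20 -> prints "29.14 MiB/s", matching the log. */
    printf("%.2f MiB/s\n", iops * bytes_per_io / (1024.0 * 1024.0));
    return 0;
}
```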
00:38:31.201 [2024-12-09 10:49:15.778637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.201 [2024-12-09 10:49:15.779528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.201 [2024-12-09 10:49:15.779601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.201 [2024-12-09 10:49:15.779643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.201 [2024-12-09 10:49:15.780125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.201 [2024-12-09 10:49:15.780374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.201 [2024-12-09 10:49:15.780409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.201 [2024-12-09 10:49:15.780426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.201 [2024-12-09 10:49:15.780442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.201 [2024-12-09 10:49:15.797588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.201 [2024-12-09 10:49:15.798479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.201 [2024-12-09 10:49:15.798552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.201 [2024-12-09 10:49:15.798594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.201 [2024-12-09 10:49:15.799157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.201 [2024-12-09 10:49:15.799713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.201 [2024-12-09 10:49:15.799789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.201 [2024-12-09 10:49:15.799825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.201 [2024-12-09 10:49:15.799858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.201 [2024-12-09 10:49:15.816759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.201 [2024-12-09 10:49:15.817570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.201 [2024-12-09 10:49:15.817641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.201 [2024-12-09 10:49:15.817682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.201 [2024-12-09 10:49:15.818247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.201 [2024-12-09 10:49:15.818824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.201 [2024-12-09 10:49:15.818879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.201 [2024-12-09 10:49:15.818913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.201 [2024-12-09 10:49:15.818947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.201 [2024-12-09 10:49:15.835814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.201 [2024-12-09 10:49:15.836626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.201 [2024-12-09 10:49:15.836699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.201 [2024-12-09 10:49:15.836764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.201 [2024-12-09 10:49:15.837320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.201 [2024-12-09 10:49:15.837896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.201 [2024-12-09 10:49:15.837953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.202 [2024-12-09 10:49:15.837988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.202 [2024-12-09 10:49:15.838022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.464 [2024-12-09 10:49:15.854878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.464 [2024-12-09 10:49:15.855718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.464 [2024-12-09 10:49:15.855803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.464 [2024-12-09 10:49:15.855845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.464 [2024-12-09 10:49:15.856386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.464 [2024-12-09 10:49:15.856963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.464 [2024-12-09 10:49:15.857018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.464 [2024-12-09 10:49:15.857064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.464 [2024-12-09 10:49:15.857098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.464 [2024-12-09 10:49:15.873975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.464 [2024-12-09 10:49:15.874770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.464 [2024-12-09 10:49:15.874843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.464 [2024-12-09 10:49:15.874885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.464 [2024-12-09 10:49:15.875428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.465 [2024-12-09 10:49:15.876005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.465 [2024-12-09 10:49:15.876069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.465 [2024-12-09 10:49:15.876103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.465 [2024-12-09 10:49:15.876139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.465 [2024-12-09 10:49:15.892989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.465 [2024-12-09 10:49:15.893823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.465 [2024-12-09 10:49:15.893907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.465 [2024-12-09 10:49:15.893951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.465 [2024-12-09 10:49:15.894493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.465 [2024-12-09 10:49:15.895075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.465 [2024-12-09 10:49:15.895130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.465 [2024-12-09 10:49:15.895165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.465 [2024-12-09 10:49:15.895198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.465 [2024-12-09 10:49:15.912102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.465 [2024-12-09 10:49:15.912940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.465 [2024-12-09 10:49:15.913013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.465 [2024-12-09 10:49:15.913054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.465 [2024-12-09 10:49:15.913596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.465 [2024-12-09 10:49:15.914175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.465 [2024-12-09 10:49:15.914231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.465 [2024-12-09 10:49:15.914266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.465 [2024-12-09 10:49:15.914299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.465 [2024-12-09 10:49:15.931350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.465 [2024-12-09 10:49:15.932134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.465 [2024-12-09 10:49:15.932206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.465 [2024-12-09 10:49:15.932248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.465 [2024-12-09 10:49:15.932818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.465 [2024-12-09 10:49:15.933373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.465 [2024-12-09 10:49:15.933430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.465 [2024-12-09 10:49:15.933465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.465 [2024-12-09 10:49:15.933499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.465 [2024-12-09 10:49:15.950361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.465 [2024-12-09 10:49:15.951189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.465 [2024-12-09 10:49:15.951260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.465 [2024-12-09 10:49:15.951301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.465 [2024-12-09 10:49:15.951883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.465 [2024-12-09 10:49:15.952440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.465 [2024-12-09 10:49:15.952494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.465 [2024-12-09 10:49:15.952530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.465 [2024-12-09 10:49:15.952563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.465 [2024-12-09 10:49:15.969422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.465 [2024-12-09 10:49:15.970281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.465 [2024-12-09 10:49:15.970353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.465 [2024-12-09 10:49:15.970395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.465 [2024-12-09 10:49:15.970963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.465 [2024-12-09 10:49:15.971545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.465 [2024-12-09 10:49:15.971600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.465 [2024-12-09 10:49:15.971634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.465 [2024-12-09 10:49:15.971668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.465 [2024-12-09 10:49:15.988529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.465 [2024-12-09 10:49:15.989400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.465 [2024-12-09 10:49:15.989472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.465 [2024-12-09 10:49:15.989514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.465 [2024-12-09 10:49:15.990083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.465 [2024-12-09 10:49:15.990639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.465 [2024-12-09 10:49:15.990694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.465 [2024-12-09 10:49:15.990747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.465 [2024-12-09 10:49:15.990786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.465 [2024-12-09 10:49:16.007633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.466 [2024-12-09 10:49:16.008471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.466 [2024-12-09 10:49:16.008544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.466 [2024-12-09 10:49:16.008585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.466 [2024-12-09 10:49:16.009151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.466 [2024-12-09 10:49:16.009714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.466 [2024-12-09 10:49:16.009786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.466 [2024-12-09 10:49:16.009835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.466 [2024-12-09 10:49:16.009870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.466 [2024-12-09 10:49:16.026742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.466 [2024-12-09 10:49:16.027547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.466 [2024-12-09 10:49:16.027618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.466 [2024-12-09 10:49:16.027659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.466 [2024-12-09 10:49:16.028223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.466 [2024-12-09 10:49:16.028798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.466 [2024-12-09 10:49:16.028854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.466 [2024-12-09 10:49:16.028890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.466 [2024-12-09 10:49:16.028924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.466 [2024-12-09 10:49:16.045781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.466 [2024-12-09 10:49:16.046587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.466 [2024-12-09 10:49:16.046660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.466 [2024-12-09 10:49:16.046702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.466 [2024-12-09 10:49:16.047197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.466 [2024-12-09 10:49:16.047488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.466 [2024-12-09 10:49:16.047523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.466 [2024-12-09 10:49:16.047547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.466 [2024-12-09 10:49:16.047582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.466 [2024-12-09 10:49:16.064972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.466 [2024-12-09 10:49:16.065796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.466 [2024-12-09 10:49:16.065870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.466 [2024-12-09 10:49:16.065912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.466 [2024-12-09 10:49:16.066456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.466 [2024-12-09 10:49:16.067032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.466 [2024-12-09 10:49:16.067086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.466 [2024-12-09 10:49:16.067121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.466 [2024-12-09 10:49:16.067155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.466 [2024-12-09 10:49:16.084045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.466 [2024-12-09 10:49:16.084871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.466 [2024-12-09 10:49:16.084944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.466 [2024-12-09 10:49:16.084985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.466 [2024-12-09 10:49:16.085527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.466 [2024-12-09 10:49:16.086109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.466 [2024-12-09 10:49:16.086165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.466 [2024-12-09 10:49:16.086200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.466 [2024-12-09 10:49:16.086235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.466 [2024-12-09 10:49:16.103100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.466 [2024-12-09 10:49:16.103927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.466 [2024-12-09 10:49:16.104000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.466 [2024-12-09 10:49:16.104042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.466 [2024-12-09 10:49:16.104586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.466 [2024-12-09 10:49:16.105169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.466 [2024-12-09 10:49:16.105226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.466 [2024-12-09 10:49:16.105261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.466 [2024-12-09 10:49:16.105295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.729 [2024-12-09 10:49:16.122230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.123062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.123134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.123176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.123718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.124300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.124358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.124395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.124430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.141311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.142188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.142273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.142317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.142883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.143440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.143495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.143529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.143564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.160431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.161283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.161356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.161398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.161967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.162523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.162578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.162613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.162647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.179504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.180359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.180431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.180473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.181039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.181609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.181664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.181698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.181751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.198357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.199218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.199291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.199333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.199898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.200468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.200524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.200559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.200594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.217493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.218309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.218380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.218422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.218992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.219547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.219601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.219636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.219669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.236562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.237406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.237479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.237521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.238088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.238644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.238698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.238749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.238788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.255709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.256553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.256625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.256666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.257228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.257801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.257856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.257906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.257941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.274826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.730 [2024-12-09 10:49:16.275674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.730 [2024-12-09 10:49:16.275762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.730 [2024-12-09 10:49:16.275807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.730 [2024-12-09 10:49:16.276350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.730 [2024-12-09 10:49:16.276929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.730 [2024-12-09 10:49:16.276984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.730 [2024-12-09 10:49:16.277018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.730 [2024-12-09 10:49:16.277052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.730 [2024-12-09 10:49:16.293904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.731 [2024-12-09 10:49:16.294666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.731 [2024-12-09 10:49:16.294750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.731 [2024-12-09 10:49:16.294794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.731 [2024-12-09 10:49:16.295336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.731 [2024-12-09 10:49:16.295909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.731 [2024-12-09 10:49:16.295966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.731 [2024-12-09 10:49:16.296002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.731 [2024-12-09 10:49:16.296035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.731 [2024-12-09 10:49:16.312693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.731 [2024-12-09 10:49:16.313563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.731 [2024-12-09 10:49:16.313635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.731 [2024-12-09 10:49:16.313677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.731 [2024-12-09 10:49:16.314261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.731 [2024-12-09 10:49:16.314833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.731 [2024-12-09 10:49:16.314888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.731 [2024-12-09 10:49:16.314924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.731 [2024-12-09 10:49:16.314959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.731 [2024-12-09 10:49:16.331856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.731 [2024-12-09 10:49:16.332654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.731 [2024-12-09 10:49:16.332746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.731 [2024-12-09 10:49:16.332793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.731 [2024-12-09 10:49:16.333336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.731 [2024-12-09 10:49:16.333911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.731 [2024-12-09 10:49:16.333967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.731 [2024-12-09 10:49:16.334002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.731 [2024-12-09 10:49:16.334037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.731 [2024-12-09 10:49:16.350887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.731 [2024-12-09 10:49:16.351738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.731 [2024-12-09 10:49:16.351812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.731 [2024-12-09 10:49:16.351853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.731 [2024-12-09 10:49:16.352397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.731 [2024-12-09 10:49:16.352975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.731 [2024-12-09 10:49:16.353031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.731 [2024-12-09 10:49:16.353077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.731 [2024-12-09 10:49:16.353111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.731 [2024-12-09 10:49:16.369981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.731 [2024-12-09 10:49:16.370818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.731 [2024-12-09 10:49:16.370891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.731 [2024-12-09 10:49:16.370932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.731 [2024-12-09 10:49:16.371475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.731 [2024-12-09 10:49:16.372057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.731 [2024-12-09 10:49:16.372113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.731 [2024-12-09 10:49:16.372148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.731 [2024-12-09 10:49:16.372181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.992 [2024-12-09 10:49:16.386379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.992 [2024-12-09 10:49:16.387097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.992 [2024-12-09 10:49:16.387170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.992 [2024-12-09 10:49:16.387225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.992 [2024-12-09 10:49:16.387791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.992 [2024-12-09 10:49:16.388041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.992 [2024-12-09 10:49:16.388070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.992 [2024-12-09 10:49:16.388087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.992 [2024-12-09 10:49:16.388104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.992 [2024-12-09 10:49:16.405585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.992 [2024-12-09 10:49:16.406423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.992 [2024-12-09 10:49:16.406495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.992 [2024-12-09 10:49:16.406536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.992 [2024-12-09 10:49:16.407103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.992 [2024-12-09 10:49:16.407664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.992 [2024-12-09 10:49:16.407737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.407781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.407816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.424695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.425551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.425623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.425665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.426231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.426807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.426864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.426898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.426933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.443794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.444635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.444705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.444768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.445312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.445905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.445961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.445996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.446030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.462894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.463666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.463755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.463801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.464343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.464929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.464985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.465020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.465054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.481007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.481788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.481821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.481839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.482203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.482786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.482811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.482827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.482842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.498164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.498986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.499018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.499036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.499587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.499972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.500036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.500086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.500122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.517377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.518249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.518322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.518365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.518935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.519493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.519549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.519584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.519619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.536511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.537131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.537204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.537245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.537809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.538362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.538417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.538451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.538485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.553829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.554391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.554434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.554453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.554694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.554953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.554981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.554997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.555013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.993 [2024-12-09 10:49:16.572967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.993 [2024-12-09 10:49:16.573746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.993 [2024-12-09 10:49:16.573819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.993 [2024-12-09 10:49:16.573867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.993 [2024-12-09 10:49:16.574410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.993 [2024-12-09 10:49:16.574985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.993 [2024-12-09 10:49:16.575051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.993 [2024-12-09 10:49:16.575085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.993 [2024-12-09 10:49:16.575128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.994 [2024-12-09 10:49:16.592012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.994 [2024-12-09 10:49:16.592796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.994 [2024-12-09 10:49:16.592872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.994 [2024-12-09 10:49:16.592915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.994 [2024-12-09 10:49:16.593456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.994 [2024-12-09 10:49:16.594031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.994 [2024-12-09 10:49:16.594097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.994 [2024-12-09 10:49:16.594132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.994 [2024-12-09 10:49:16.594165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.994 [2024-12-09 10:49:16.611049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.994 [2024-12-09 10:49:16.611885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.994 [2024-12-09 10:49:16.611956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.994 [2024-12-09 10:49:16.611997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.994 [2024-12-09 10:49:16.612538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.994 [2024-12-09 10:49:16.613121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.994 [2024-12-09 10:49:16.613176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.994 [2024-12-09 10:49:16.613211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.994 [2024-12-09 10:49:16.613244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:31.994 [2024-12-09 10:49:16.630158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:31.994 [2024-12-09 10:49:16.630912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:31.994 [2024-12-09 10:49:16.630984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:31.994 [2024-12-09 10:49:16.631037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:31.994 [2024-12-09 10:49:16.631581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:31.994 [2024-12-09 10:49:16.632158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:31.994 [2024-12-09 10:49:16.632214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:31.994 [2024-12-09 10:49:16.632248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:31.994 [2024-12-09 10:49:16.632281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.256 [2024-12-09 10:49:16.649166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.256 [2024-12-09 10:49:16.649937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.256 [2024-12-09 10:49:16.650009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.256 [2024-12-09 10:49:16.650049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.256 [2024-12-09 10:49:16.650591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.256 [2024-12-09 10:49:16.651163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.256 [2024-12-09 10:49:16.651220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.256 [2024-12-09 10:49:16.651254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.256 [2024-12-09 10:49:16.651287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.256 [2024-12-09 10:49:16.668231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.256 [2024-12-09 10:49:16.668999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.256 [2024-12-09 10:49:16.669069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.256 [2024-12-09 10:49:16.669109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.256 [2024-12-09 10:49:16.669651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.256 [2024-12-09 10:49:16.670230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.256 [2024-12-09 10:49:16.670284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.256 [2024-12-09 10:49:16.670319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.256 [2024-12-09 10:49:16.670352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.256 [2024-12-09 10:49:16.687265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.256 [2024-12-09 10:49:16.688123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.256 [2024-12-09 10:49:16.688194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.256 [2024-12-09 10:49:16.688236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.256 [2024-12-09 10:49:16.688801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.256 [2024-12-09 10:49:16.689371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.256 [2024-12-09 10:49:16.689424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.256 [2024-12-09 10:49:16.689458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.256 [2024-12-09 10:49:16.689491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.706399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.707262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.707334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.707374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.707947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.708510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.708563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.708596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.708629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.725564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.726431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.726503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.726544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.727114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.727670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.727742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.727783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.727817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 5595.50 IOPS, 21.86 MiB/s [2024-12-09T09:49:16.911Z] [2024-12-09 10:49:16.744518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.745346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.745418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.745459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.746028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.746583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.746638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.746694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.746747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.763618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.764466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.764536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.764576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.765145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.765701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.765785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.765824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.765857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.782752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.783592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.783662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.783703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.784268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.784849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.784904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.784939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.784973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.801857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.802699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.802788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.802854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.803398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.803978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.804032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.804066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.804099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.820871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.821749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.821822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.821864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.822406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.822982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.823038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.823071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.823104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.839998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.840859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.840931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.840971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.841513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.842097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.842153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.842188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.257 [2024-12-09 10:49:16.842220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.257 [2024-12-09 10:49:16.859115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.257 [2024-12-09 10:49:16.859958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.257 [2024-12-09 10:49:16.860029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.257 [2024-12-09 10:49:16.860069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.257 [2024-12-09 10:49:16.860612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.257 [2024-12-09 10:49:16.861191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.257 [2024-12-09 10:49:16.861246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.257 [2024-12-09 10:49:16.861282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.258 [2024-12-09 10:49:16.861315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.258 [2024-12-09 10:49:16.878207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.258 [2024-12-09 10:49:16.879082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.258 [2024-12-09 10:49:16.879153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.258 [2024-12-09 10:49:16.879208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.258 [2024-12-09 10:49:16.879777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.258 [2024-12-09 10:49:16.880333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.258 [2024-12-09 10:49:16.880386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.258 [2024-12-09 10:49:16.880420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.258 [2024-12-09 10:49:16.880453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.258 [2024-12-09 10:49:16.897430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.258 [2024-12-09 10:49:16.898299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.258 [2024-12-09 10:49:16.898370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.258 [2024-12-09 10:49:16.898411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.258 [2024-12-09 10:49:16.898981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.258 [2024-12-09 10:49:16.899536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.258 [2024-12-09 10:49:16.899588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.258 [2024-12-09 10:49:16.899622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.258 [2024-12-09 10:49:16.899655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:16.916573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:16.917451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:16.917533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:16.917574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:16.918144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:16.918705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:16.918778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:16.918814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:16.918847] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:16.935716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:16.936596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:16.936667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:16.936707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:16.937275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:16.937868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:16.937925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:16.937960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:16.937992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:16.954872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:16.955684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:16.955772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:16.955817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:16.956360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:16.956939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:16.956993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:16.957028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:16.957061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:16.973964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:16.974818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:16.974891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:16.974933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:16.975475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:16.976054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:16.976109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:16.976142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:16.976174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:16.993050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:16.993822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:16.993894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:16.993934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:16.994476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:16.995059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:16.995114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:16.995161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:16.995197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:17.012080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:17.012969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:17.013042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:17.013082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:17.013624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:17.014218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:17.014275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:17.014309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:17.014342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:17.031265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:17.032016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:17.032089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:17.032129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:17.032671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:17.033250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:17.033305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:17.033341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:17.033373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.521 [2024-12-09 10:49:17.050288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.521 [2024-12-09 10:49:17.051154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.521 [2024-12-09 10:49:17.051226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.521 [2024-12-09 10:49:17.051266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.521 [2024-12-09 10:49:17.051830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.521 [2024-12-09 10:49:17.052387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.521 [2024-12-09 10:49:17.052439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.521 [2024-12-09 10:49:17.052472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.521 [2024-12-09 10:49:17.052506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.522 [2024-12-09 10:49:17.069411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.522 [2024-12-09 10:49:17.070300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.522 [2024-12-09 10:49:17.070371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.522 [2024-12-09 10:49:17.070411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.522 [2024-12-09 10:49:17.070982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.522 [2024-12-09 10:49:17.071503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.522 [2024-12-09 10:49:17.071530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.522 [2024-12-09 10:49:17.071546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.522 [2024-12-09 10:49:17.071560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.522 [2024-12-09 10:49:17.088539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.522 [2024-12-09 10:49:17.089417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.522 [2024-12-09 10:49:17.089490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.522 [2024-12-09 10:49:17.089531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.522 [2024-12-09 10:49:17.090101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.522 [2024-12-09 10:49:17.090659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.522 [2024-12-09 10:49:17.090712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.522 [2024-12-09 10:49:17.090770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.522 [2024-12-09 10:49:17.090806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.522 [2024-12-09 10:49:17.107690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.522 [2024-12-09 10:49:17.108555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.522 [2024-12-09 10:49:17.108627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.522 [2024-12-09 10:49:17.108668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.522 [2024-12-09 10:49:17.109234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.522 [2024-12-09 10:49:17.109815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.522 [2024-12-09 10:49:17.109871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.522 [2024-12-09 10:49:17.109906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.522 [2024-12-09 10:49:17.109939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.522 [2024-12-09 10:49:17.126847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.522 [2024-12-09 10:49:17.127740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.522 [2024-12-09 10:49:17.127813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.522 [2024-12-09 10:49:17.127867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.522 [2024-12-09 10:49:17.128412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.522 [2024-12-09 10:49:17.128990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.522 [2024-12-09 10:49:17.129045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.522 [2024-12-09 10:49:17.129080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.522 [2024-12-09 10:49:17.129114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.522 [2024-12-09 10:49:17.145996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.522 [2024-12-09 10:49:17.146860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.522 [2024-12-09 10:49:17.146932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.522 [2024-12-09 10:49:17.146973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.522 [2024-12-09 10:49:17.147516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.522 [2024-12-09 10:49:17.148096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.522 [2024-12-09 10:49:17.148150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.522 [2024-12-09 10:49:17.148184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.522 [2024-12-09 10:49:17.148218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.522 [2024-12-09 10:49:17.165112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.522 [2024-12-09 10:49:17.165954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.522 [2024-12-09 10:49:17.166032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.522 [2024-12-09 10:49:17.166073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.522 [2024-12-09 10:49:17.166615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.522 [2024-12-09 10:49:17.167192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.522 [2024-12-09 10:49:17.167247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.522 [2024-12-09 10:49:17.167280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.522 [2024-12-09 10:49:17.167312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.184226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.185069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.185141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.185183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.185749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.186305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.186372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.785 [2024-12-09 10:49:17.186408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.785 [2024-12-09 10:49:17.186441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.203317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.204174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.204244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.204285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.204855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.205418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.205472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.785 [2024-12-09 10:49:17.205505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.785 [2024-12-09 10:49:17.205537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.222452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.223358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.223437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.223479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.224049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.224605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.224657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.785 [2024-12-09 10:49:17.224692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.785 [2024-12-09 10:49:17.224743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.241637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.242461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.242533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.242574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.243135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.243690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.243773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.785 [2024-12-09 10:49:17.243809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.785 [2024-12-09 10:49:17.243856] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.260740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.261554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.261625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.261665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.262225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.262801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.262855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.785 [2024-12-09 10:49:17.262888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.785 [2024-12-09 10:49:17.262921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.279886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.280696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.280786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.280829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.281371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.281948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.282003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.785 [2024-12-09 10:49:17.282037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.785 [2024-12-09 10:49:17.282081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.298940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.299774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.299853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.299892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.300435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.301006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.301061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.785 [2024-12-09 10:49:17.301097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.785 [2024-12-09 10:49:17.301130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.785 [2024-12-09 10:49:17.318027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.785 [2024-12-09 10:49:17.318883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.785 [2024-12-09 10:49:17.318962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.785 [2024-12-09 10:49:17.319003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.785 [2024-12-09 10:49:17.319545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.785 [2024-12-09 10:49:17.320126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.785 [2024-12-09 10:49:17.320181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.786 [2024-12-09 10:49:17.320215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.786 [2024-12-09 10:49:17.320249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.786 [2024-12-09 10:49:17.336801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.786 [2024-12-09 10:49:17.337660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.786 [2024-12-09 10:49:17.337750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.786 [2024-12-09 10:49:17.337796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.786 [2024-12-09 10:49:17.338340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.786 [2024-12-09 10:49:17.338910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.786 [2024-12-09 10:49:17.338964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.786 [2024-12-09 10:49:17.338998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.786 [2024-12-09 10:49:17.339030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.786 [2024-12-09 10:49:17.355906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:32.786 [2024-12-09 10:49:17.356769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.786 [2024-12-09 10:49:17.356841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:32.786 [2024-12-09 10:49:17.356881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:32.786 [2024-12-09 10:49:17.357422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:32.786 [2024-12-09 10:49:17.357999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:32.786 [2024-12-09 10:49:17.358054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:32.786 [2024-12-09 10:49:17.358087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:32.786 [2024-12-09 10:49:17.358120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:32.786 [2024-12-09 10:49:17.374996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:32.786 [2024-12-09 10:49:17.375793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.786 [2024-12-09 10:49:17.375866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:32.786 [2024-12-09 10:49:17.375906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:32.786 [2024-12-09 10:49:17.376462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:32.786 [2024-12-09 10:49:17.377049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:32.786 [2024-12-09 10:49:17.377105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:32.786 [2024-12-09 10:49:17.377139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:32.786 [2024-12-09 10:49:17.377172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:32.786 [2024-12-09 10:49:17.393870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:32.786 [2024-12-09 10:49:17.394714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.786 [2024-12-09 10:49:17.394802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:32.786 [2024-12-09 10:49:17.394842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:32.786 [2024-12-09 10:49:17.395383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:32.786 [2024-12-09 10:49:17.395956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:32.786 [2024-12-09 10:49:17.396012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:32.786 [2024-12-09 10:49:17.396047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:32.786 [2024-12-09 10:49:17.396080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:32.786 [2024-12-09 10:49:17.412934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:32.786 [2024-12-09 10:49:17.413769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.786 [2024-12-09 10:49:17.413841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:32.786 [2024-12-09 10:49:17.413881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:32.786 [2024-12-09 10:49:17.414423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:32.786 [2024-12-09 10:49:17.415010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:32.786 [2024-12-09 10:49:17.415074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:32.786 [2024-12-09 10:49:17.415108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:32.786 [2024-12-09 10:49:17.415142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:32.786 [2024-12-09 10:49:17.432037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:32.786 [2024-12-09 10:49:17.432886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.786 [2024-12-09 10:49:17.432957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:32.786 [2024-12-09 10:49:17.432997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:32.786 [2024-12-09 10:49:17.433538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:32.786 [2024-12-09 10:49:17.434124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:32.786 [2024-12-09 10:49:17.434192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:32.786 [2024-12-09 10:49:17.434228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:32.786 [2024-12-09 10:49:17.434260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.048 [2024-12-09 10:49:17.451148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.048 [2024-12-09 10:49:17.452005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.048 [2024-12-09 10:49:17.452076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.048 [2024-12-09 10:49:17.452116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.048 [2024-12-09 10:49:17.452657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.048 [2024-12-09 10:49:17.453234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.048 [2024-12-09 10:49:17.453289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.048 [2024-12-09 10:49:17.453323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.048 [2024-12-09 10:49:17.453355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.048 [2024-12-09 10:49:17.470302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.048 [2024-12-09 10:49:17.471074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.048 [2024-12-09 10:49:17.471147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.048 [2024-12-09 10:49:17.471187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.048 [2024-12-09 10:49:17.471752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.048 [2024-12-09 10:49:17.472308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.048 [2024-12-09 10:49:17.472362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.048 [2024-12-09 10:49:17.472397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.472430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
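Every retry in this stretch walks the same sequence: nvme_ctrlr_disconnect tears down the qpair, nvme_tcp_qpair_connect_sock fails with the ECONNREFUSED above, the subsequent flush reports errno 9 (EBADF, the socket is already gone), nvme_ctrlr_process_init lands the controller in an error state, spdk_nvme_ctrlr_reconnect_poll_async gives up, nvme_ctrlr_fail marks the controller failed, and bdev_nvme_reset_ctrlr_complete records the failed reset; the "same with the state(6) to be set" notice just says the qpair's receive state is being set to the value it already holds. The timestamps show the next attempt starting roughly every 13-19 ms. A schematic of that outer loop, assuming a stub connector that always fails; this sketches the pattern the log implies, not SPDK's actual code, and both helper names are hypothetical:

/* sketch: bounded retry loop with a fixed delay, mimicking the
 * reset -> reconnect -> fail cadence seen in the log. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool try_connect_stub(void)           /* hypothetical: always refused */
{
    return false;
}

static void sleep_ms(long ms)
{
    struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
    nanosleep(&ts, NULL);
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        if (try_connect_stub()) {
            printf("attempt %d: reconnected\n", attempt);
            return 0;
        }
        /* each failed pass corresponds to one error block in the log */
        printf("attempt %d: Resetting controller failed.\n", attempt);
        sleep_ms(15);                        /* log shows ~13-19 ms spacing */
    }
    return 1;
}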
00:38:33.049 [2024-12-09 10:49:17.489306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.490178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.490250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.490291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.490860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.491416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.491469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.491504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.491550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.049 [2024-12-09 10:49:17.503263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.503704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.503744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.503764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.504005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.504250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.504274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.504288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.504303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.049 [2024-12-09 10:49:17.517273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.517682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.517713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.517741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.517983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.518228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.518251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.518267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.518282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.049 [2024-12-09 10:49:17.531281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.531778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.531811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.531829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.532070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.532316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.532340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.532355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.532369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.049 [2024-12-09 10:49:17.545325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.545802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.545834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.545852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.546092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.546336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.546360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.546375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.546389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.049 [2024-12-09 10:49:17.559347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.559842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.559874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.559891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.560132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.560376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.560400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.560415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.560430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.049 [2024-12-09 10:49:17.573388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.573821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.573852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.573871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.574111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.574362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.574386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.574401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.574416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.049 [2024-12-09 10:49:17.587454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.587943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.587975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.587993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.588241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.588487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.588510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.588525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.049 [2024-12-09 10:49:17.588540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.049 [2024-12-09 10:49:17.601517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.049 [2024-12-09 10:49:17.601952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.049 [2024-12-09 10:49:17.601987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.049 [2024-12-09 10:49:17.602005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.049 [2024-12-09 10:49:17.602246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.049 [2024-12-09 10:49:17.602491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.049 [2024-12-09 10:49:17.602515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.049 [2024-12-09 10:49:17.602530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.602544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.050 [2024-12-09 10:49:17.615496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.050 [2024-12-09 10:49:17.615981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.050 [2024-12-09 10:49:17.616012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.050 [2024-12-09 10:49:17.616030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.050 [2024-12-09 10:49:17.616270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.050 [2024-12-09 10:49:17.616516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.050 [2024-12-09 10:49:17.616539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.050 [2024-12-09 10:49:17.616555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.616569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.050 [2024-12-09 10:49:17.629536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.050 [2024-12-09 10:49:17.630027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.050 [2024-12-09 10:49:17.630059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.050 [2024-12-09 10:49:17.630078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.050 [2024-12-09 10:49:17.630318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.050 [2024-12-09 10:49:17.630563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.050 [2024-12-09 10:49:17.630597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.050 [2024-12-09 10:49:17.630613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.630628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.050 [2024-12-09 10:49:17.643586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.050 [2024-12-09 10:49:17.644077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.050 [2024-12-09 10:49:17.644110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.050 [2024-12-09 10:49:17.644128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.050 [2024-12-09 10:49:17.644368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.050 [2024-12-09 10:49:17.644613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.050 [2024-12-09 10:49:17.644637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.050 [2024-12-09 10:49:17.644652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.644666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.050 [2024-12-09 10:49:17.657629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.050 [2024-12-09 10:49:17.658111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.050 [2024-12-09 10:49:17.658143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.050 [2024-12-09 10:49:17.658161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.050 [2024-12-09 10:49:17.658401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.050 [2024-12-09 10:49:17.658647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.050 [2024-12-09 10:49:17.658670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.050 [2024-12-09 10:49:17.658686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.658700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.050 [2024-12-09 10:49:17.671655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.050 [2024-12-09 10:49:17.672148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.050 [2024-12-09 10:49:17.672180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.050 [2024-12-09 10:49:17.672198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.050 [2024-12-09 10:49:17.672438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.050 [2024-12-09 10:49:17.672683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.050 [2024-12-09 10:49:17.672707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.050 [2024-12-09 10:49:17.672732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.672756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.050 [2024-12-09 10:49:17.685712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.050 [2024-12-09 10:49:17.686112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.050 [2024-12-09 10:49:17.686143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.050 [2024-12-09 10:49:17.686160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.050 [2024-12-09 10:49:17.686401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.050 [2024-12-09 10:49:17.686646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.050 [2024-12-09 10:49:17.686669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.050 [2024-12-09 10:49:17.686685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.686699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.050 [2024-12-09 10:49:17.699671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.050 [2024-12-09 10:49:17.700141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.050 [2024-12-09 10:49:17.700173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.050 [2024-12-09 10:49:17.700192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.050 [2024-12-09 10:49:17.700433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.050 [2024-12-09 10:49:17.700677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.050 [2024-12-09 10:49:17.700701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.050 [2024-12-09 10:49:17.700716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.050 [2024-12-09 10:49:17.700740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.313 [2024-12-09 10:49:17.713703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.313 [2024-12-09 10:49:17.714113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.313 [2024-12-09 10:49:17.714145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.313 [2024-12-09 10:49:17.714164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.313 [2024-12-09 10:49:17.714405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.313 [2024-12-09 10:49:17.714650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.313 [2024-12-09 10:49:17.714674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.313 [2024-12-09 10:49:17.714689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.313 [2024-12-09 10:49:17.714704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.313 [2024-12-09 10:49:17.727303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.313 4476.40 IOPS, 17.49 MiB/s [2024-12-09T09:49:17.967Z] [2024-12-09 10:49:17.729343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.313 [2024-12-09 10:49:17.729387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.313 [2024-12-09 10:49:17.729403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.313 [2024-12-09 10:49:17.729600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.313 [2024-12-09 10:49:17.729829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.313 [2024-12-09 10:49:17.729850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.313 [2024-12-09 10:49:17.729863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.313 [2024-12-09 10:49:17.729875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
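The "4476.40 IOPS, 17.49 MiB/s" fragment spliced into the entry above is the I/O generator's periodic throughput report landing on the same stdout as the driver errors; its Z-suffixed UTC timestamp (09:49) sits one hour behind the local timestamps (10:49) around it. The two figures are mutually consistent with a 4 KiB I/O size, though the job configuration itself is not shown in this excerpt:

    17.49 MiB/s * 1024 KiB/MiB ≈ 17909.8 KiB/s
    17909.8 KiB/s / 4476.40 IOPS ≈ 4.0 KiB per I/O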
00:38:33.313 [2024-12-09 10:49:17.740633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.313 [2024-12-09 10:49:17.740980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.313 [2024-12-09 10:49:17.741007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.313 [2024-12-09 10:49:17.741044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.313 [2024-12-09 10:49:17.741241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.313 [2024-12-09 10:49:17.741442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.313 [2024-12-09 10:49:17.741462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.313 [2024-12-09 10:49:17.741474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.313 [2024-12-09 10:49:17.741486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.313 [2024-12-09 10:49:17.753986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.313 [2024-12-09 10:49:17.754346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.313 [2024-12-09 10:49:17.754371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.313 [2024-12-09 10:49:17.754386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.313 [2024-12-09 10:49:17.754583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.313 [2024-12-09 10:49:17.754811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.313 [2024-12-09 10:49:17.754832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.313 [2024-12-09 10:49:17.754845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.313 [2024-12-09 10:49:17.754858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.313 [2024-12-09 10:49:17.767375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.313 [2024-12-09 10:49:17.767748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.313 [2024-12-09 10:49:17.767794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.313 [2024-12-09 10:49:17.767808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.313 [2024-12-09 10:49:17.768044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.313 [2024-12-09 10:49:17.768244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.313 [2024-12-09 10:49:17.768264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.313 [2024-12-09 10:49:17.768276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.313 [2024-12-09 10:49:17.768289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.313 [2024-12-09 10:49:17.780592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.313 [2024-12-09 10:49:17.780955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.313 [2024-12-09 10:49:17.780983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.780998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.781211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.781411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.781431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.781443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.781455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.314 [2024-12-09 10:49:17.793823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.794270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.794296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.794325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.794522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.794731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.794767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.794780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.794792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.314 [2024-12-09 10:49:17.807132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.807457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.807482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.807497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.807693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.807925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.807950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.807964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.807976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.314 [2024-12-09 10:49:17.820472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.820837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.820863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.820878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.821094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.821295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.821314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.821327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.821339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.314 [2024-12-09 10:49:17.833801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.834280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.834307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.834338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.834548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.834793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.834816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.834830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.834844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.314 [2024-12-09 10:49:17.847260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.847605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.847641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.847657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.847901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.848126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.848147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.848159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.848171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.314 [2024-12-09 10:49:17.860588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.860948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.860975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.861004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.861222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.861432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.861452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.861465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.861477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.314 [2024-12-09 10:49:17.873850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.874284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.874309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.874338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.874541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.874757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.874789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.874803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.874815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.314 [2024-12-09 10:49:17.887105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.887523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.887556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.887586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.887790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.314 [2024-12-09 10:49:17.887990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.314 [2024-12-09 10:49:17.888010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.314 [2024-12-09 10:49:17.888022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.314 [2024-12-09 10:49:17.888035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.314 [2024-12-09 10:49:17.900335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.314 [2024-12-09 10:49:17.900789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.314 [2024-12-09 10:49:17.900820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.314 [2024-12-09 10:49:17.900851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.314 [2024-12-09 10:49:17.901070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.315 [2024-12-09 10:49:17.901270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.315 [2024-12-09 10:49:17.901290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.315 [2024-12-09 10:49:17.901302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.315 [2024-12-09 10:49:17.901314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.315 [2024-12-09 10:49:17.913605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.315 [2024-12-09 10:49:17.914081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.315 [2024-12-09 10:49:17.914121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.315 [2024-12-09 10:49:17.914136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.315 [2024-12-09 10:49:17.914346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.315 [2024-12-09 10:49:17.914546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.315 [2024-12-09 10:49:17.914565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.315 [2024-12-09 10:49:17.914578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.315 [2024-12-09 10:49:17.914589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.315 [2024-12-09 10:49:17.926853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.315 [2024-12-09 10:49:17.927312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.315 [2024-12-09 10:49:17.927352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.315 [2024-12-09 10:49:17.927368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.315 [2024-12-09 10:49:17.927565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.315 [2024-12-09 10:49:17.927794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.315 [2024-12-09 10:49:17.927815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.315 [2024-12-09 10:49:17.927828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.315 [2024-12-09 10:49:17.927840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.315 [2024-12-09 10:49:17.940138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.315 [2024-12-09 10:49:17.940549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.315 [2024-12-09 10:49:17.940574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.315 [2024-12-09 10:49:17.940588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.315 [2024-12-09 10:49:17.940834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.315 [2024-12-09 10:49:17.941056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.315 [2024-12-09 10:49:17.941075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.315 [2024-12-09 10:49:17.941088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.315 [2024-12-09 10:49:17.941100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.315 [2024-12-09 10:49:17.953432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.315 [2024-12-09 10:49:17.953921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.315 [2024-12-09 10:49:17.953962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.315 [2024-12-09 10:49:17.953979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.315 [2024-12-09 10:49:17.954193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.315 [2024-12-09 10:49:17.954393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.315 [2024-12-09 10:49:17.954413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.315 [2024-12-09 10:49:17.954425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.315 [2024-12-09 10:49:17.954437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:33.578 [2024-12-09 10:49:17.966772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:33.578 [2024-12-09 10:49:17.967237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.578 [2024-12-09 10:49:17.967262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420 00:38:33.578 [2024-12-09 10:49:17.967292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set 00:38:33.578 [2024-12-09 10:49:17.967489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor 00:38:33.579 [2024-12-09 10:49:17.967689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:33.579 [2024-12-09 10:49:17.967732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:33.579 [2024-12-09 10:49:17.967747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:33.579 [2024-12-09 10:49:17.967760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:33.579 [2024-12-09 10:49:17.980115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:17.980573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:17.980598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:17.980627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:17.980854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:17.981077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:17.981096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:17.981114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:17.981127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:17.993412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:17.993864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:17.993903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:17.993919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:17.994115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:17.994316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:17.994335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:17.994347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:17.994359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:18.006644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:18.007126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:18.007151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:18.007179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:18.007376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:18.007576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:18.007596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:18.007609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:18.007621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:18.019928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:18.020400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:18.020439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:18.020455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:18.020651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:18.020893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:18.020914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:18.020927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:18.020939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:18.033251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:18.033701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:18.033750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:18.033765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:18.033983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:18.034200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:18.034219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:18.034232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:18.034243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:18.046536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:18.046963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:18.046989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:18.047020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:18.047233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:18.047434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:18.047453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:18.047466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:18.047478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:18.059827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:18.060295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:18.060319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:18.060349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:18.060545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:18.060775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:18.060796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:18.060809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:18.060821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:18.073130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:18.073573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:18.073616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:18.073632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:18.073859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.579 [2024-12-09 10:49:18.074080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.579 [2024-12-09 10:49:18.074100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.579 [2024-12-09 10:49:18.074113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.579 [2024-12-09 10:49:18.074125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.579 [2024-12-09 10:49:18.086408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.579 [2024-12-09 10:49:18.086906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.579 [2024-12-09 10:49:18.086948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.579 [2024-12-09 10:49:18.086966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.579 [2024-12-09 10:49:18.087221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.087434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.087469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.087485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.580 [2024-12-09 10:49:18.087503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.580 [2024-12-09 10:49:18.099878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.580 [2024-12-09 10:49:18.100337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.580 [2024-12-09 10:49:18.100377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.580 [2024-12-09 10:49:18.100393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.580 [2024-12-09 10:49:18.100589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.100822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.100843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.100857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.580 [2024-12-09 10:49:18.100869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.580 [2024-12-09 10:49:18.113239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.580 [2024-12-09 10:49:18.113678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.580 [2024-12-09 10:49:18.113718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.580 [2024-12-09 10:49:18.113743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.580 [2024-12-09 10:49:18.113947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.114171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.114191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.114204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.580 [2024-12-09 10:49:18.114216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.580 [2024-12-09 10:49:18.126508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.580 [2024-12-09 10:49:18.126968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.580 [2024-12-09 10:49:18.126994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.580 [2024-12-09 10:49:18.127024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.580 [2024-12-09 10:49:18.127236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.127436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.127456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.127468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.580 [2024-12-09 10:49:18.127481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.580 [2024-12-09 10:49:18.139784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.580 [2024-12-09 10:49:18.140200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.580 [2024-12-09 10:49:18.140224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.580 [2024-12-09 10:49:18.140253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.580 [2024-12-09 10:49:18.140450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.140650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.140669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.140681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.580 [2024-12-09 10:49:18.140693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.580 [2024-12-09 10:49:18.153020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.580 [2024-12-09 10:49:18.153464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.580 [2024-12-09 10:49:18.153503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.580 [2024-12-09 10:49:18.153519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.580 [2024-12-09 10:49:18.153742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.153949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.153969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.153988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.580 [2024-12-09 10:49:18.154015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.580 [2024-12-09 10:49:18.166338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.580 [2024-12-09 10:49:18.166800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.580 [2024-12-09 10:49:18.166840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.580 [2024-12-09 10:49:18.166856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.580 [2024-12-09 10:49:18.167072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.167273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.167292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.167305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.580 [2024-12-09 10:49:18.167317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.580 [2024-12-09 10:49:18.179608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.580 [2024-12-09 10:49:18.180035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.580 [2024-12-09 10:49:18.180061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.580 [2024-12-09 10:49:18.180075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.580 [2024-12-09 10:49:18.180285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.580 [2024-12-09 10:49:18.180485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.580 [2024-12-09 10:49:18.180505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.580 [2024-12-09 10:49:18.180518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.581 [2024-12-09 10:49:18.180529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.581 [2024-12-09 10:49:18.192835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.581 [2024-12-09 10:49:18.193300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.581 [2024-12-09 10:49:18.193324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.581 [2024-12-09 10:49:18.193353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.581 [2024-12-09 10:49:18.193550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.581 [2024-12-09 10:49:18.193777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.581 [2024-12-09 10:49:18.193798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.581 [2024-12-09 10:49:18.193812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.581 [2024-12-09 10:49:18.193824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.581 [2024-12-09 10:49:18.206164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.581 [2024-12-09 10:49:18.206612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.581 [2024-12-09 10:49:18.206636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.581 [2024-12-09 10:49:18.206666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.581 [2024-12-09 10:49:18.206892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.581 [2024-12-09 10:49:18.207113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.581 [2024-12-09 10:49:18.207132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.581 [2024-12-09 10:49:18.207145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.581 [2024-12-09 10:49:18.207157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.581 [2024-12-09 10:49:18.219509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.581 [2024-12-09 10:49:18.219926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.581 [2024-12-09 10:49:18.219952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.581 [2024-12-09 10:49:18.219982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.581 [2024-12-09 10:49:18.220195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.581 [2024-12-09 10:49:18.220395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.581 [2024-12-09 10:49:18.220414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.581 [2024-12-09 10:49:18.220427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.581 [2024-12-09 10:49:18.220439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.843 [2024-12-09 10:49:18.232758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.843 [2024-12-09 10:49:18.233216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.843 [2024-12-09 10:49:18.233256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.843 [2024-12-09 10:49:18.233271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.843 [2024-12-09 10:49:18.233468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.843 [2024-12-09 10:49:18.233669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.843 [2024-12-09 10:49:18.233688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.843 [2024-12-09 10:49:18.233701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.843 [2024-12-09 10:49:18.233739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.843 [2024-12-09 10:49:18.246045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.843 [2024-12-09 10:49:18.246498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.843 [2024-12-09 10:49:18.246523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.843 [2024-12-09 10:49:18.246558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.843 [2024-12-09 10:49:18.246784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.843 [2024-12-09 10:49:18.246991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.843 [2024-12-09 10:49:18.247012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.843 [2024-12-09 10:49:18.247039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.843 [2024-12-09 10:49:18.247051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.843 [2024-12-09 10:49:18.259367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.843 [2024-12-09 10:49:18.259823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.843 [2024-12-09 10:49:18.259849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.843 [2024-12-09 10:49:18.259879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.843 [2024-12-09 10:49:18.260095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.843 [2024-12-09 10:49:18.260296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.843 [2024-12-09 10:49:18.260315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.843 [2024-12-09 10:49:18.260328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.843 [2024-12-09 10:49:18.260340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
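The block above is one iteration of SPDK's bdev_nvme reconnect loop, and it repeats roughly every 13 ms because every connect() to 10.0.0.2:4420 is answered with errno 111 (ECONNREFUSED): the address is reachable, but nothing is listening on the port, so the disconnect, the failed socket connect, and bdev_nvme_reset_ctrlr_complete cycle until a listener appears. A minimal sketch of the same probe outside SPDK, using bash's /dev/tcp redirection with the address and port taken from the log:

    # Dial the target the log keeps retrying; a refused connection here
    # is the same ECONNREFUSED (errno 111) that posix_sock_create reports.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'connect() to 10.0.0.2:4420 refused - no listener on the port'
    fi

The refusals are expected at this point in the test: the lines that follow show the nvmf target application being killed and restarted by the test script.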
00:38:33.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2241301 Killed "${NVMF_APP[@]}" "$@"
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2242251
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2242251
00:38:33.843 [2024-12-09 10:49:18.272809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2242251 ']'
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:33.843 [2024-12-09 10:49:18.273154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.843 [2024-12-09 10:49:18.273181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.843 [2024-12-09 10:49:18.273196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:33.843 [2024-12-09 10:49:18.273392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:33.843 [2024-12-09 10:49:18.273604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.843 [2024-12-09 10:49:18.273624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.843 [2024-12-09 10:49:18.273637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.843 [2024-12-09 10:49:18.273650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.843 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:33.843 [2024-12-09 10:49:18.286262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.843 [2024-12-09 10:49:18.286601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.843 [2024-12-09 10:49:18.286627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.843 [2024-12-09 10:49:18.286642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.843 [2024-12-09 10:49:18.286872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.843 [2024-12-09 10:49:18.287112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.843 [2024-12-09 10:49:18.287132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.843 [2024-12-09 10:49:18.287144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.843 [2024-12-09 10:49:18.287156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.843 [2024-12-09 10:49:18.299652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.843 [2024-12-09 10:49:18.300084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.843 [2024-12-09 10:49:18.300111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.843 [2024-12-09 10:49:18.300126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.843 [2024-12-09 10:49:18.300322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.843 [2024-12-09 10:49:18.300524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.843 [2024-12-09 10:49:18.300543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.843 [2024-12-09 10:49:18.300555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.843 [2024-12-09 10:49:18.300567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
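The xtrace above is tgt_init/nvmfappstart relaunching the target: a fresh nvmf_tgt (pid 2242251) is started inside the cvl_0_0_ns_spdk network namespace, and waitforlisten polls until the process answers on /var/tmp/spdk.sock, while the host-side reconnect errors keep interleaving with the shell trace. A condensed sketch of that start-and-wait pattern (the SPDK_DIR variable and the polling loop are illustrative stand-ins for the real helpers in nvmf/common.sh and autotest_common.sh):

    # Start the target in its namespace, then wait for its RPC socket,
    # roughly what nvmfappstart and waitforlisten are doing above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the app died during startup
        sleep 0.1
    done

Until that poll succeeds there is still no listener on 10.0.0.2:4420, so the errno 111 cycle continues through the startup notices below.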
00:38:33.844 [2024-12-09 10:49:18.313016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.313395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.313421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.313435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.313638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.313867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.313888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.313901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.313913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.844 [2024-12-09 10:49:18.326226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.326593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.326633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.326647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.326884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.327106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.327126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.327139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.327152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.844 [2024-12-09 10:49:18.335310] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:38:33.844 [2024-12-09 10:49:18.335408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:33.844 [2024-12-09 10:49:18.339567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.339990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.340020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.340052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.340306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.340533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.340556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.340574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.340589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.844 [2024-12-09 10:49:18.353042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.353418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.353445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.353459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.353661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.353896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.353918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.353931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.353944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
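The DPDK EAL parameters line above shows how the flags from the xtrace are forwarded: -m 0xE becomes the DPDK coremask -c 0xE, -e 0xFFFF is the tracepoint group mask reported in the app_setup_trace notices further down, and the instance id from -i 0 appears as --file-prefix=spdk0 and in the suggested 'spdk_trace -i 0' invocation. Decoding the coremask explains the 'Total cores available: 3' notice and the three reactors that start on cores 1-3 below (a throwaway check, not part of the test):

    # 0xE = 0b1110: bit n set means core n is used, so cores 1, 2 and 3.
    mask=0xE
    for bit in 0 1 2 3; do
        (( (mask >> bit) & 1 )) && echo "core $bit is in coremask $mask"
    done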
00:38:33.844 [2024-12-09 10:49:18.366337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.366676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.366716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.366741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.366966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.367201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.367221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.367233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.367245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.844 [2024-12-09 10:49:18.379562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.379934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.379976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.379990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.380188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.380388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.380407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.380420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.380434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.844 [2024-12-09 10:49:18.398501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.399086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.399157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.399200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.399766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.400318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.400384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.400420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.400454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.844 [2024-12-09 10:49:18.417455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.418080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.418154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.418194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.418765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.419317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.419370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.419404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.844 [2024-12-09 10:49:18.419437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.844 [2024-12-09 10:49:18.436601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.844 [2024-12-09 10:49:18.437156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.844 [2024-12-09 10:49:18.437228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.844 [2024-12-09 10:49:18.437268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.844 [2024-12-09 10:49:18.437837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.844 [2024-12-09 10:49:18.438391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.844 [2024-12-09 10:49:18.438444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.844 [2024-12-09 10:49:18.438478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.845 [2024-12-09 10:49:18.438510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.845 [2024-12-09 10:49:18.455617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.845 [2024-12-09 10:49:18.456180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.845 [2024-12-09 10:49:18.456252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.845 [2024-12-09 10:49:18.456292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.845 [2024-12-09 10:49:18.456860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.845 [2024-12-09 10:49:18.457413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.845 [2024-12-09 10:49:18.457467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.845 [2024-12-09 10:49:18.457501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.845 [2024-12-09 10:49:18.457548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.845 [2024-12-09 10:49:18.474684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.845 [2024-12-09 10:49:18.475469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.845 [2024-12-09 10:49:18.475539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.845 [2024-12-09 10:49:18.475579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.845 [2024-12-09 10:49:18.475957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.845 [2024-12-09 10:49:18.476480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.845 [2024-12-09 10:49:18.476533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.845 [2024-12-09 10:49:18.476568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.845 [2024-12-09 10:49:18.476600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:33.845 [2024-12-09 10:49:18.480177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:38:33.845 [2024-12-09 10:49:18.493763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:33.845 [2024-12-09 10:49:18.494552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.845 [2024-12-09 10:49:18.494626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:33.845 [2024-12-09 10:49:18.494670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:33.845 [2024-12-09 10:49:18.494998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:33.845 [2024-12-09 10:49:18.495549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:33.845 [2024-12-09 10:49:18.495602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:33.845 [2024-12-09 10:49:18.495638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:33.845 [2024-12-09 10:49:18.495673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:34.108 [2024-12-09 10:49:18.512904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:34.108 [2024-12-09 10:49:18.513760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.108 [2024-12-09 10:49:18.513841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:34.108 [2024-12-09 10:49:18.513885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:34.108 [2024-12-09 10:49:18.514436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:34.108 [2024-12-09 10:49:18.515023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:34.108 [2024-12-09 10:49:18.515079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:34.108 [2024-12-09 10:49:18.515117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:34.108 [2024-12-09 10:49:18.515153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:34.108 [2024-12-09 10:49:18.529858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:34.108 [2024-12-09 10:49:18.530547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.108 [2024-12-09 10:49:18.530618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:34.108 [2024-12-09 10:49:18.530659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:34.108 [2024-12-09 10:49:18.531004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:34.108 [2024-12-09 10:49:18.531561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:34.108 [2024-12-09 10:49:18.531615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:34.108 [2024-12-09 10:49:18.531649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:34.108 [2024-12-09 10:49:18.531682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:34.108 [2024-12-09 10:49:18.547111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:34.108 [2024-12-09 10:49:18.547917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.108 [2024-12-09 10:49:18.547949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:34.108 [2024-12-09 10:49:18.547968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:34.108 [2024-12-09 10:49:18.548519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:34.108 [2024-12-09 10:49:18.548932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:34.108 [2024-12-09 10:49:18.548958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:34.108 [2024-12-09 10:49:18.548975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:34.108 [2024-12-09 10:49:18.548989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:34.108 [2024-12-09 10:49:18.564798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:34.108 [2024-12-09 10:49:18.565390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.108 [2024-12-09 10:49:18.565460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x878c60 with addr=10.0.0.2, port=4420
00:38:34.108 [2024-12-09 10:49:18.565501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878c60 is same with the state(6) to be set
00:38:34.108 [2024-12-09 10:49:18.565928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878c60 (9): Bad file descriptor
00:38:34.108 [2024-12-09 10:49:18.566445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:34.108 [2024-12-09 10:49:18.566499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:34.108 [2024-12-09 10:49:18.566533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:34.108 [2024-12-09 10:49:18.566566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... identical reset/reconnect failure cycle against tqpair=0x878c60 repeats at 10:49:18.583 ...]
00:38:34.108 [2024-12-09 10:49:18.590434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:34.108 [2024-12-09 10:49:18.590516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:34.108 [2024-12-09 10:49:18.590552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:34.108 [2024-12-09 10:49:18.590581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:34.108 [2024-12-09 10:49:18.590605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
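Note: the app_setup_trace lines above tell you how to pull the tracepoint data this run recorded. A minimal sketch, assuming the spdk_trace binary from the SPDK build is on PATH and shm instance 0 as the notice states; the '-f' replay flag and the output filenames are assumptions, not taken from this log:

    spdk_trace -s nvmf -i 0 > nvmf_trace.txt              # snapshot the live nvmf app, shm id 0
    cp /dev/shm/nvmf_trace.0 /tmp/                        # keep the raw shm file, as the notice suggests
    spdk_trace -f /tmp/nvmf_trace.0 > trace_offline.txt   # parse the copy later (assumed flag)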
00:38:34.108 [2024-12-09 10:49:18.593957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:38:34.108 [2024-12-09 10:49:18.594087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:38:34.108 [2024-12-09 10:49:18.594091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[... identical reset/reconnect failure cycle against tqpair=0x878c60 repeats at 10:49:18.597 and 10:49:18.611 ...]
[... identical reset/reconnect failure cycle against tqpair=0x878c60 (connect() errno = 111, flush on bad fd, reinitialization failed) repeats 8 more times, roughly every 14 ms, from 10:49:18.625 through 10:49:18.724, each attempt ending in 'Resetting controller failed.' ...]
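The block above is the host-side bdev_nvme reset loop: each pass disconnects the controller, the new connect() to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED, since the target's listener is not up yet), the qpair flush then hits a dead socket, and the reset completes as failed before the next attempt is scheduled. A hedged sketch of the attach-time knobs that pace such retries; the option letters are as in recent SPDK scripts/rpc.py and the values are illustrative, not what this harness used:

    # -l ctrlr-loss-timeout-sec, -o reconnect-delay-sec, -u fast-io-fail-timeout-sec
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        -l 600 -o 1 -u 30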
00:38:34.109 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
3730.33 IOPS, 14.57 MiB/s [2024-12-09T09:49:18.763Z]
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... identical reset/reconnect failure cycle against tqpair=0x878c60 repeats at 10:49:18.738 and 10:49:18.752 ...]
00:38:34.371 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:34.371 [2024-12-09 10:49:18.767190] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... identical reset/reconnect failure cycle against tqpair=0x878c60 repeats at 10:49:18.766 ...]
00:38:34.371 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... identical reset/reconnect failure cycle against tqpair=0x878c60 repeats at 10:49:18.780, 10:49:18.794 and 10:49:18.808 ...]
00:38:34.372 Malloc0
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... identical reset/reconnect failure cycle against tqpair=0x878c60 repeats at 10:49:18.822 ...]
00:38:34.372 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:34.372 10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:34.372 [2024-12-09 10:49:18.835170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:34.372 [2024-12-09 10:49:18.836394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:49:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2241589
00:38:34.372 [2024-12-09 10:49:18.866458] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:38:36.261 4220.86 IOPS, 16.49 MiB/s [2024-12-09T09:49:21.853Z]
4755.00 IOPS, 18.57 MiB/s [2024-12-09T09:49:22.789Z]
5194.22 IOPS, 20.29 MiB/s [2024-12-09T09:49:24.169Z]
5548.60 IOPS, 21.67 MiB/s [2024-12-09T09:49:25.107Z]
5839.64 IOPS, 22.81 MiB/s [2024-12-09T09:49:26.040Z]
6083.83 IOPS, 23.76 MiB/s [2024-12-09T09:49:26.979Z]
6299.92 IOPS, 24.61 MiB/s [2024-12-09T09:49:27.917Z]
6487.07 IOPS, 25.34 MiB/s [2024-12-09T09:49:27.918Z]
6639.13 IOPS, 25.93 MiB/s
00:38:43.264                                                Latency(us)
00:38:43.264 [2024-12-09T09:49:27.918Z] Device Information          : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average    min       max
00:38:43.264 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:43.264 Verification LBA range: start 0x0 length 0x4000
00:38:43.264 Nvme1n1                                       : 15.01         6639.02  25.93    5821.56   0.00    10240.52   952.70    29709.65
00:38:43.264 [2024-12-09T09:49:27.918Z] ===================================================================================================================
00:38:43.264 [2024-12-09T09:49:27.918Z] Total                       :                6639.02  25.93    5821.56   0.00    10240.52   952.70    29709.65
00:38:43.522 10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:43.522 10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
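The rpc_cmd calls traced above are the complete target bring-up for this test. Replayed as standalone invocations (rpc_cmd is the harness wrapper around scripts/rpc.py talking to the default RPC socket; flags copied verbatim from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the pending reset finally succeeds and throughput ramps. The summary table is self-consistent: 6639.02 IOPS x 4096-byte I/Os = 25.93 MiB/s, matching the MiB/s column.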
00:38:43.522 10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:43.522 10:49:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:43.522 rmmod nvme_tcp 00:38:43.522 rmmod nvme_fabrics 00:38:43.522 rmmod nvme_keyring 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2242251 ']' 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2242251 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2242251 ']' 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2242251 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2242251 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2242251' 00:38:43.522 killing process with pid 2242251 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2242251 00:38:43.522 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2242251 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.781 10:49:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:46.326 00:38:46.326 real 0m24.060s 00:38:46.326 user 1m1.069s 00:38:46.326 sys 0m5.639s 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:46.326 ************************************ 00:38:46.326 END TEST nvmf_bdevperf 00:38:46.326 ************************************ 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.326 ************************************ 00:38:46.326 START TEST nvmf_target_disconnect 00:38:46.326 ************************************ 00:38:46.326 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:46.326 * Looking for test storage... 00:38:46.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:46.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.327 --rc genhtml_branch_coverage=1 00:38:46.327 --rc genhtml_function_coverage=1 00:38:46.327 --rc genhtml_legend=1 00:38:46.327 --rc geninfo_all_blocks=1 00:38:46.327 --rc geninfo_unexecuted_blocks=1 00:38:46.327 00:38:46.327 ' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:46.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.327 --rc genhtml_branch_coverage=1 00:38:46.327 --rc genhtml_function_coverage=1 00:38:46.327 --rc genhtml_legend=1 00:38:46.327 --rc geninfo_all_blocks=1 00:38:46.327 --rc geninfo_unexecuted_blocks=1 00:38:46.327 00:38:46.327 ' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:46.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.327 --rc genhtml_branch_coverage=1 00:38:46.327 --rc genhtml_function_coverage=1 00:38:46.327 --rc genhtml_legend=1 00:38:46.327 --rc geninfo_all_blocks=1 00:38:46.327 --rc geninfo_unexecuted_blocks=1 00:38:46.327 00:38:46.327 ' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:46.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:46.327 --rc genhtml_branch_coverage=1 00:38:46.327 --rc genhtml_function_coverage=1 00:38:46.327 --rc genhtml_legend=1 00:38:46.327 --rc geninfo_all_blocks=1 00:38:46.327 --rc geninfo_unexecuted_blocks=1 00:38:46.327 00:38:46.327 ' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:46.327 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:46.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:46.328 10:49:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:49.632 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:49.633 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:49.633 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:49.633 Found net devices under 0000:84:00.0: cvl_0_0 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:49.633 Found net devices under 0000:84:00.1: cvl_0_1 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
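The device discovery traced above boils down to a sysfs walk; a minimal sketch using the same glob nvmf/common.sh expands, with the PCI addresses copied from this log:

    # Map each E810 port's PCI address to its kernel net device name.
    for pci in 0000:84:00.0 0000:84:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] || continue            # skip if the port exposes no netdev
            echo "Found net devices under $pci: ${net##*/}"
        done
    done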
00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:49.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:38:49.633 00:38:49.633 --- 10.0.0.2 ping statistics --- 00:38:49.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.633 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
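Condensed, the nvmf_tcp_init sequence interleaved above is: flush both ports, move the target-side port into a private namespace, address the two ends of the link, open TCP/4420 on the initiator side, and ping both directions to prove the data path. All names and addresses below are copied from the trace; the harness additionally tags its iptables rule with an SPDK_NVMF comment:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns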
00:38:49.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:38:49.633 00:38:49.633 --- 10.0.0.1 ping statistics --- 00:38:49.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.633 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:49.633 ************************************ 00:38:49.633 START TEST nvmf_target_disconnect_tc1 00:38:49.633 ************************************ 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:49.633 10:49:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:49.633 10:49:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:49.633 [2024-12-09 10:49:34.107017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.634 [2024-12-09 10:49:34.107175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x698570 with addr=10.0.0.2, port=4420 00:38:49.634 [2024-12-09 10:49:34.107270] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:49.634 [2024-12-09 10:49:34.107330] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:49.634 [2024-12-09 10:49:34.107365] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:49.634 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:49.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:49.634 Initializing NVMe Controllers 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:49.634 00:38:49.634 real 0m0.231s 00:38:49.634 user 0m0.111s 00:38:49.634 sys 0m0.118s 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:49.634 ************************************ 00:38:49.634 END TEST nvmf_target_disconnect_tc1 00:38:49.634 ************************************ 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
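tc1, which just passed above, is a pure negative test: nothing is listening on 10.0.0.2:4420 yet, so the reconnect example's first connect() must fail with errno 111 and the expected failure (es=1) is swallowed as a pass. A hedged recap; the if-guard below stands in for autotest's NOT/valid_exec_arg machinery seen in the trace:

RECONNECT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
if "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    # Reaching here would mean a stray target answered on 4420.
    echo "tc1: reconnect unexpectedly succeeded" >&2
    exit 1
fi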
00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:49.634 ************************************ 00:38:49.634 START TEST nvmf_target_disconnect_tc2 00:38:49.634 ************************************ 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2245552 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2245552 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2245552 ']' 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:49.634 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:49.634 [2024-12-09 10:49:34.266408] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:38:49.634 [2024-12-09 10:49:34.266511] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.895 [2024-12-09 10:49:34.399358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:49.895 [2024-12-09 10:49:34.522296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.895 [2024-12-09 10:49:34.522411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
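Behind the nvmfappstart line above: nvmf_tgt is launched inside the target namespace with the logged flags, and the harness blocks until the app's RPC socket answers before any configuration RPCs are issued. A sketch from the repository root; polling rpc_get_methods is an assumed stand-in for autotest's waitforlisten helper, not that helper itself:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
# Hypothetical readiness loop: retry until the RPC server on /var/tmp/spdk.sock replies.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done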
00:38:49.895 [2024-12-09 10:49:34.522447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.895 [2024-12-09 10:49:34.522476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.895 [2024-12-09 10:49:34.522501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:49.895 [2024-12-09 10:49:34.526153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:49.895 [2024-12-09 10:49:34.526254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:49.895 [2024-12-09 10:49:34.526354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:49.895 [2024-12-09 10:49:34.526364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:50.155 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.155 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:50.155 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:50.155 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:50.155 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.416 Malloc0 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.416 [2024-12-09 10:49:34.896004] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.416 10:49:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.416 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.417 [2024-12-09 10:49:34.944594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2245583 00:38:50.417 10:49:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:52.329 10:49:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2245552 00:38:52.329 10:49:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error 
(sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 [2024-12-09 10:49:36.975475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write 
completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 [2024-12-09 10:49:36.975923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O 
failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Write completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 [2024-12-09 10:49:36.976588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.329 starting I/O failed 00:38:52.329 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 
starting I/O failed 00:38:52.330 Write completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 Read completed with error (sct=0, sc=8) 00:38:52.330 starting I/O failed 00:38:52.330 [2024-12-09 10:49:36.977041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:52.330 [2024-12-09 10:49:36.977350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.977414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.977658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.977709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.977895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.977922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.978081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.978122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.978251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.978275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.978491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.978533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.978659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.978693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.978845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.978872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 
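For reference, the storm above took five RPCs and one signal to produce: a 64 MB RAM-backed bdev is exposed over NVMe/TCP, the reconnect workload attaches, and then the target is SIGKILLed mid-run. Every queued command completes with sct=0/sc=8, which decodes to generic status "command aborted due to SQ deletion", as the host tears down qpairs 1-4 on the CQ transport error (-6, "No such device or address"). Equivalent rpc.py calls, parameters copied verbatim from the trace (rpc_cmd in the script drives the same socket):

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# reconnect (pid 2245583) starts against 10.0.0.2:4420, then two seconds later:
kill -9 2245552    # the nvmf_tgt pid; all in-flight I/O fails immediately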
00:38:52.330 [2024-12-09 10:49:36.979019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.979044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.979238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.979262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.979395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.979424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.979573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.979597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.979759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.979785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.979940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.979970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.980158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.980208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.980413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.980437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.980585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.980609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.980792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.980819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 
00:38:52.330 [2024-12-09 10:49:36.980936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.980963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.981147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.981171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.981291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.981330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.981458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.981483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.981663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.981687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.981837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.981864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.981957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.981984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.982133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.982173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.982357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.982381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 00:38:52.330 [2024-12-09 10:49:36.982555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.330 [2024-12-09 10:49:36.982580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.330 qpair failed and we were unable to recover it. 
00:38:52.330 [2024-12-09 10:49:36.982733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.610 [2024-12-09 10:49:36.982759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.610 qpair failed and we were unable to recover it. 00:38:52.610 [2024-12-09 10:49:36.983006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.610 [2024-12-09 10:49:36.983139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:52.610 qpair failed and we were unable to recover it. 00:38:52.610 [2024-12-09 10:49:36.983459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.610 [2024-12-09 10:49:36.983527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:52.610 qpair failed and we were unable to recover it. 00:38:52.610 [2024-12-09 10:49:36.983822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.610 [2024-12-09 10:49:36.983855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:52.610 qpair failed and we were unable to recover it. 00:38:52.610 [2024-12-09 10:49:36.983986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.610 [2024-12-09 10:49:36.984052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:52.610 qpair failed and we were unable to recover it. 00:38:52.610 [2024-12-09 10:49:36.984300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.610 [2024-12-09 10:49:36.984364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:52.610 qpair failed and we were unable to recover it. 00:38:52.610 [2024-12-09 10:49:36.984669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.984753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.984960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.985013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.985206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.985257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.985442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.985492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 
00:38:52.611 [2024-12-09 10:49:36.985690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.985738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.985903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.985930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.986083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.986134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.986295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.986346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.986497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.986547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.986708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.986743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.986903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.986929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.987074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.987103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.987275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.987299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.987417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.987457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 
00:38:52.611 [2024-12-09 10:49:36.987630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.987655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.987791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.987818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.987948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.987974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.988160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.988183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.988336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.988361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.988543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.988568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.988744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.988799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.988928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.988953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.989131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.989183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 00:38:52.611 [2024-12-09 10:49:36.989404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.989455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it. 
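From here to the end of the excerpt the host is simply cycling its reconnect policy: every attempt is connect() -> errno 111, because nothing listens on 10.0.0.2:4420 after the kill (the differing tqpair addresses, 0x7f7f6c000b90 and 0x7f7f70000b90, are successive qpair objects across retries). errno 111 is ECONNREFUSED on Linux; a quick check, assuming the standard kernel header package is installed:

grep -h 'define[[:space:]]*ECONNREFUSED' /usr/include/asm-generic/errno.h
# expected: #define ECONNREFUSED 111 /* Connection refused */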
00:38:52.611 [2024-12-09 10:49:36.989631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.611 [2024-12-09 10:49:36.989655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.611 qpair failed and we were unable to recover it.
00:38:52.617 [identical connect()/qpair error triple repeated continuously from 10:49:36.989 through 10:49:37.031 for tqpair=0x7f7f6c000b90, addr=10.0.0.2, port=4420; duplicate log lines omitted]
00:38:52.617 [2024-12-09 10:49:37.032165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.032217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.032367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.032390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.032534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.032558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.032718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.032784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.032904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.032930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.033073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.033111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.033286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.033309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.033487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.033511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.033682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.033705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.033868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.033892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 
00:38:52.617 [2024-12-09 10:49:37.034049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.034074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.034270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.034322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.034561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.034584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.034757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.034782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.034932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.034977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.035161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.035209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.035424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.035472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.035648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.035672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.035824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.035913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.036103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.036155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 
00:38:52.617 [2024-12-09 10:49:37.036304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.617 [2024-12-09 10:49:37.036352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.617 qpair failed and we were unable to recover it. 00:38:52.617 [2024-12-09 10:49:37.036459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.036497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.036601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.036625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.036817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.036877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.037070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.037094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.037272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.037296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.037447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.037470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.037598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.037623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.037797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.037838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.037935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.037961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 
00:38:52.618 [2024-12-09 10:49:37.038109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.038138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.038279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.038318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.038451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.038490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.038698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.038729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.038877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.038903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.039042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.039082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.039234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.039274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.039367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.039392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.039558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.039582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.039748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.039773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 
00:38:52.618 [2024-12-09 10:49:37.039942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.039998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.040151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.040206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.040340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.040379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.040547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.040586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.040748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.040775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.040942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.040994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.041147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.041198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.041369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.041393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.041609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.041632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.041817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.041867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 
00:38:52.618 [2024-12-09 10:49:37.042033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.042073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.042208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.042261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.042445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.618 [2024-12-09 10:49:37.042469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.618 qpair failed and we were unable to recover it. 00:38:52.618 [2024-12-09 10:49:37.042622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.042646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.042831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.042879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.043069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.043117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.043293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.043344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.043496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.043519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.043683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.043728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.043850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.043875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 
00:38:52.619 [2024-12-09 10:49:37.044029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.044079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.044266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.044289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.044427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.044451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.044596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.044635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.044787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.044812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.045033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.045057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.045184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.045208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.045337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.045362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.045513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.045537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.045711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.045740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 
00:38:52.619 [2024-12-09 10:49:37.045893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.045948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.046087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.046125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.046290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.046314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.046470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.046510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.046651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.046674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.046845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.046871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.047048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.047099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.047259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.047306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.047481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.047504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.047684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.047708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 
00:38:52.619 [2024-12-09 10:49:37.047920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.047968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.048145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.048194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.048398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.048446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.048614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.048638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.048805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.048850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.049010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.049058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.619 [2024-12-09 10:49:37.049249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.619 [2024-12-09 10:49:37.049297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.619 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.049526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.049549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.049718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.049764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.049919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.049968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 
00:38:52.620 [2024-12-09 10:49:37.050141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.050189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.050401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.050451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.050618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.050642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.050774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.050837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.051062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.051110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.051270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.051318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.051492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.051515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.051694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.051717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.051899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.051954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.052070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.052133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 
00:38:52.620 [2024-12-09 10:49:37.052251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.052301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.052453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.052505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.052610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.052634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.052774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.052800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.052995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.053020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.053208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.053232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.053395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.053418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.053592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.053615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.053824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.053875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.054066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.054116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 
00:38:52.620 [2024-12-09 10:49:37.054305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.054360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.054583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.054607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.054804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.054860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.055043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.055091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.055329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.055377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.055550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.055574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.055748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.055772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.055931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.055981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.056175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.620 [2024-12-09 10:49:37.056224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.620 qpair failed and we were unable to recover it. 00:38:52.620 [2024-12-09 10:49:37.056424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.056447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 
00:38:52.621 [2024-12-09 10:49:37.056594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.056617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.056750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.056775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.056962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.057010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.057225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.057274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.057427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.057451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.057614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.057652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.057811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.057860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.058047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.058097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.058272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.058322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 00:38:52.621 [2024-12-09 10:49:37.058486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.621 [2024-12-09 10:49:37.058509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.621 qpair failed and we were unable to recover it. 
00:38:52.621 [2024-12-09 10:49:37.058692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.621 [2024-12-09 10:49:37.058716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:52.621 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 10:49:37.058 and 10:49:37.103 ...]
00:38:52.627 [2024-12-09 10:49:37.103647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.627 [2024-12-09 10:49:37.103686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:52.627 qpair failed and we were unable to recover it.
00:38:52.627 [2024-12-09 10:49:37.103892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.103942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.104190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.104238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.104399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.104485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.104664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.104687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.104893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.104945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.105137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.105186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.105380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.105431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.105580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.105604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.105747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.105788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.105936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.105985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 
00:38:52.627 [2024-12-09 10:49:37.106109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.106178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.106336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.106380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.106552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.106576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.106714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.106758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.106962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.107012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.107211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.107263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.107395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.107419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.107587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.107625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.107771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.107797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.107982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.108034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 
00:38:52.627 [2024-12-09 10:49:37.108214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.108266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.108467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.108490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.108640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.108663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.108806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.108856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.109027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.109083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.109261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.109308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.109513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.109537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.109710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.109758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.627 [2024-12-09 10:49:37.109973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.627 [2024-12-09 10:49:37.110025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.627 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.110174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.110222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 
00:38:52.628 [2024-12-09 10:49:37.110388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.110412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.110598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.110621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.110801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.110851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.110999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.111052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.111236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.111282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.111520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.111571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.111763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.111788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.111981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.112035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.112218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.112268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.112449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.112499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 
00:38:52.628 [2024-12-09 10:49:37.112660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.112684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.112877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.112929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.113111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.113162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.113347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.113397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.113585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.113608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.113738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.113777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.113954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.113997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.114206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.114257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.114452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.114500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.114708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.114755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 
00:38:52.628 [2024-12-09 10:49:37.114888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.114942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.115131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.115181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.115404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.115453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.115589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.115613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.115802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.115854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.116082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.116131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.116287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.116331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.116460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.116499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.116664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.116702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.116877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.116901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 
00:38:52.628 [2024-12-09 10:49:37.117073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.117112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.117308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.628 [2024-12-09 10:49:37.117357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.628 qpair failed and we were unable to recover it. 00:38:52.628 [2024-12-09 10:49:37.117530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.117554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.117731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.117756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.117946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.117997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.118175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.118223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.118374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.118422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.118558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.118596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.118779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.118848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.118991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.119039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 
00:38:52.629 [2024-12-09 10:49:37.119220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.119268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.119494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.119545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.119761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.119786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.119934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.119985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.120136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.120182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.120360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.120408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.120576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.120599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.120747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.120786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.120970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.121019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.121234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.121285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 
00:38:52.629 [2024-12-09 10:49:37.121465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.121489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.121646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.121669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.121806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.121846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.121983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.122037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.122204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.122254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.122429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.122452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.122594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.122632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.122829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.122887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.123055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.123078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.123266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.123317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 
00:38:52.629 [2024-12-09 10:49:37.123525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.123552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.123759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.123784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.123930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.123980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.124136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.124184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.124344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.629 [2024-12-09 10:49:37.124394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.629 qpair failed and we were unable to recover it. 00:38:52.629 [2024-12-09 10:49:37.124562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.124585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.124719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.124766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.124960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.125016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.125220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.125270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.125401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.125461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 
00:38:52.630 [2024-12-09 10:49:37.125593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.125632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.125794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.125834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.126006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.126058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.126281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.126329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.126571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.126594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.126733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.126757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.126918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.126975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.127120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.127173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.127353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.127400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.127554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.127577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 
00:38:52.630 [2024-12-09 10:49:37.127739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.127779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.127957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.128010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.128249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.128298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.128427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.128450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.128581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.128606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.128784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.128810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.128954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.128978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.129162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.129186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.129320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.129344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.129440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.129465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 
00:38:52.630 [2024-12-09 10:49:37.129605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.129629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.129764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.129804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.129981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.130005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.130152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.630 [2024-12-09 10:49:37.130202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.630 qpair failed and we were unable to recover it. 00:38:52.630 [2024-12-09 10:49:37.130368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.631 [2024-12-09 10:49:37.130391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.631 qpair failed and we were unable to recover it. 00:38:52.631 [2024-12-09 10:49:37.130567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.631 [2024-12-09 10:49:37.130590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.631 qpair failed and we were unable to recover it. 00:38:52.631 [2024-12-09 10:49:37.130735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.631 [2024-12-09 10:49:37.130760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.631 qpair failed and we were unable to recover it. 00:38:52.631 [2024-12-09 10:49:37.130894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.631 [2024-12-09 10:49:37.130945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.631 qpair failed and we were unable to recover it. 00:38:52.631 [2024-12-09 10:49:37.131109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.631 [2024-12-09 10:49:37.131159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.631 qpair failed and we were unable to recover it. 00:38:52.631 [2024-12-09 10:49:37.131349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.631 [2024-12-09 10:49:37.131372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.631 qpair failed and we were unable to recover it. 
00:38:52.631 [2024-12-09 10:49:37.131524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.631 [2024-12-09 10:49:37.131551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:52.631 qpair failed and we were unable to recover it.
00:38:52.636 [the same three-line failure (connect() errno = 111 -> sock connection error on tqpair=0x7f7f6c000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 10:49:37.131524 through 10:49:37.174917; duplicate repetitions elided]
00:38:52.636 [2024-12-09 10:49:37.175100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.636 [2024-12-09 10:49:37.175151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.636 qpair failed and we were unable to recover it. 00:38:52.636 [2024-12-09 10:49:37.175276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.636 [2024-12-09 10:49:37.175338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.636 qpair failed and we were unable to recover it. 00:38:52.636 [2024-12-09 10:49:37.175516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.636 [2024-12-09 10:49:37.175539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.636 qpair failed and we were unable to recover it. 00:38:52.636 [2024-12-09 10:49:37.175728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.636 [2024-12-09 10:49:37.175752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.636 qpair failed and we were unable to recover it. 00:38:52.636 [2024-12-09 10:49:37.175912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.175963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.176112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.176163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.176306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.176364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.176542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.176566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.176715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.176784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.176933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.176984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 
00:38:52.637 [2024-12-09 10:49:37.177132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.177156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.177271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.177309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.177474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.177512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.177685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.177709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.177886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.177911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.178039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.178077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.178229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.178253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.178459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.178487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.178695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.178742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.178935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.178984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 
00:38:52.637 [2024-12-09 10:49:37.179148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.179202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.179381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.179431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.179650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.179673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.179844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.179894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.180057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.180120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.180269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.180316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.180483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.180507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.180685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.180708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.180878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.180930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.181115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.181162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 
00:38:52.637 [2024-12-09 10:49:37.181390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.181439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.181620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.181644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.181805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.181856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.182045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.182096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.182313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.182363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.182529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.182552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.182691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.182736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.182938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.637 [2024-12-09 10:49:37.182986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.637 qpair failed and we were unable to recover it. 00:38:52.637 [2024-12-09 10:49:37.183159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.183210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.183392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.183440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 
00:38:52.638 [2024-12-09 10:49:37.183641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.183664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.183842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.183910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.184058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.184114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.184301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.184350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.184556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.184580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.184715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.184772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.184954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.185006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.185226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.185276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.185466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.185528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.185701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.185733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 
00:38:52.638 [2024-12-09 10:49:37.185923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.185975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.186144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.186194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.186425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.186476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.186618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.186641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.186778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.186813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.187001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.187050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.187197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.187243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.187425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.187479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.187688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.187711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.187916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.187965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 
00:38:52.638 [2024-12-09 10:49:37.188169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.188220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.188450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.188500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.188716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.188757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.188954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.189017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.189221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.189269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.189472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.189525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.189692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.189715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.189875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.189927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.190122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.190173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.190388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.190438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 
00:38:52.638 [2024-12-09 10:49:37.190595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.190619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.190758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.190817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.638 qpair failed and we were unable to recover it. 00:38:52.638 [2024-12-09 10:49:37.191022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.638 [2024-12-09 10:49:37.191074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.191299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.191350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.191516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.191540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.191709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.191754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.191931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.191978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.192117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.192165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.192353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.192403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.192583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.192607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 
00:38:52.639 [2024-12-09 10:49:37.192760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.192784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.192975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.193024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.193209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.193259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.193440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.193488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.193665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.193689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.193883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.193934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.194169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.194220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.194369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.194421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.194576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.194600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.194793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.194818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 
00:38:52.639 [2024-12-09 10:49:37.194972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.194996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.195176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.195199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.195300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.195324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.195514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.195551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.195716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.195746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.195899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.195950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.196094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.196136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.196284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.196344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.196515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.196539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.196677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.196715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 
00:38:52.639 [2024-12-09 10:49:37.196860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.196899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.639 [2024-12-09 10:49:37.197078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.639 [2024-12-09 10:49:37.197102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.639 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.197279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.197303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.197471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.197495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.197666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.197689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.197890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.197941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.198155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.198204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.198436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.198487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.198627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.198650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.198798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.198886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 
00:38:52.640 [2024-12-09 10:49:37.199119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.199167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.199319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.199368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.199534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.199558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.199746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.199772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.199926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.199974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.200148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.200196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.200384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.200433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.200576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.200599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.200784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.200840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.201076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.201126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 
00:38:52.640 [2024-12-09 10:49:37.201283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.201334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.201506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.201529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.201665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.201703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.201851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.201890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.202022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.202047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.202223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.202247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.202412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.202478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.202651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.202675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.202864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.202913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 00:38:52.640 [2024-12-09 10:49:37.203092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.640 [2024-12-09 10:49:37.203141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:52.640 qpair failed and we were unable to recover it. 
00:38:52.640 [2024-12-09 10:49:37.203322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.640 [2024-12-09 10:49:37.203368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:52.640 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." trio repeats continuously from 10:49:37.203322 through 10:49:37.247118 (console time 00:38:52.640-00:38:52.927); from 10:49:37.235485 onward the failures alternate between tqpair=0x7f7f6c000b90 and tqpair=0xefa5d0, always with addr=10.0.0.2, port=4420 ...]
00:38:52.927 [2024-12-09 10:49:37.247297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.247361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.247554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.247617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.247868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.247896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.248037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.248064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.248193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.248232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.248387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.248413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.248570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.248595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.248750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.927 [2024-12-09 10:49:37.248777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.927 qpair failed and we were unable to recover it. 00:38:52.927 [2024-12-09 10:49:37.248885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.248910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.249027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.249052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 
00:38:52.928 [2024-12-09 10:49:37.249215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.249240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.249372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.249397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.249544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.249569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.249703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.249740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.249899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.249924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.250109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.250134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.250308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.250348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.250503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.250528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.250689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.250714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.250849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.250874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 
00:38:52.928 [2024-12-09 10:49:37.251008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.251036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.251168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.251193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.251368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.251394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.251567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.251602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.251823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.251858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.252032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.252057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.252235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.252298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.252576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.252610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.252791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.252817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.253046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.253079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 
00:38:52.928 [2024-12-09 10:49:37.253322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.253358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.253557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.253583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.253746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.253780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.253969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.254003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.254173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.254199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.254334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.254391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.254652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.254715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.254992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.255018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.255238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.255274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.255441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.255480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 
00:38:52.928 [2024-12-09 10:49:37.255663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.255700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.255873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.255899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.256050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.256114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.256398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.256423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.256636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.256670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.256831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.256858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.928 qpair failed and we were unable to recover it. 00:38:52.928 [2024-12-09 10:49:37.256985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.928 [2024-12-09 10:49:37.257019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.257247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.257283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.257464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.257500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.257755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.257781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 
00:38:52.929 [2024-12-09 10:49:37.257959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.257992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.258175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.258211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.258440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.258466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.258615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.258648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.258808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.258843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.259015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.259057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.259283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.259346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.259638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.259701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.259929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.259956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.260148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.260184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 
00:38:52.929 [2024-12-09 10:49:37.260432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.260478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.260729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.260755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.260938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.261003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.261315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.261379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.261590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.261632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.261844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.261882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.262029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.262071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.262245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.262270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.262398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.262445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.262620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.262655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 
00:38:52.929 [2024-12-09 10:49:37.262884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.262910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.263084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.263146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.263378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.263413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.263618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.263652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.263816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.263842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.263965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.263990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.264205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.264245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.264410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.264456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.264640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.264673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.264901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.264930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 
00:38:52.929 [2024-12-09 10:49:37.265098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.265135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.265292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.265356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.265634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.265660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.265832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.265890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.266079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.929 [2024-12-09 10:49:37.266112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.929 qpair failed and we were unable to recover it. 00:38:52.929 [2024-12-09 10:49:37.266280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.266306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.266480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.266513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.266682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.266718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.266863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.266889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.267022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.267049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 
00:38:52.930 [2024-12-09 10:49:37.267200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.267264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.267541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.267565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.267773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.267808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.267961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.267996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.268153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.268179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.268315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.268341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.268489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.268523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.268698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.268729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.268834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.268878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.269041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.269104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 
00:38:52.930 [2024-12-09 10:49:37.269311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.269336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.269475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.269536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.269778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.269805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.269932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.269957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.270109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.270155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.270286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.270319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.270465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.270491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.270582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.270612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.270788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.270853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.271093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.271119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 
00:38:52.930 [2024-12-09 10:49:37.271290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.271324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.271475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.271508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.271615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.271641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.271799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.271826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.272028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.272093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.272287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.272327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.272469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.272536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.272757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.272794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.272942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.272968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.273119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.273163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 
00:38:52.930 [2024-12-09 10:49:37.273382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.273415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.273575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.930 [2024-12-09 10:49:37.273603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.930 qpair failed and we were unable to recover it. 00:38:52.930 [2024-12-09 10:49:37.273735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.273785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 00:38:52.931 [2024-12-09 10:49:37.273956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.274020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 00:38:52.931 [2024-12-09 10:49:37.274304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.274329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 00:38:52.931 [2024-12-09 10:49:37.274506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.274540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 00:38:52.931 [2024-12-09 10:49:37.274677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.274710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 00:38:52.931 [2024-12-09 10:49:37.274934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.274960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 00:38:52.931 [2024-12-09 10:49:37.275108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.275148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 00:38:52.931 [2024-12-09 10:49:37.275338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.931 [2024-12-09 10:49:37.275408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.931 qpair failed and we were unable to recover it. 
00:38:52.931 [2024-12-09 10:49:37.275712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.931 [2024-12-09 10:49:37.275788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:52.931 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for tqpair=0xefa5d0, with advancing timestamps, through 10:49:37.292882 ...]
00:38:52.933 [2024-12-09 10:49:37.293056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.933 [2024-12-09 10:49:37.293124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420
00:38:52.933 qpair failed and we were unable to recover it.
[... eight more identical sequences for tqpair=0x7f7f78000b90, after which the failures resume on tqpair=0xefa5d0 (from 10:49:37.294868) and repeat through the end of this excerpt ...]
00:38:52.936 [2024-12-09 10:49:37.318347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.936 [2024-12-09 10:49:37.318380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:52.936 qpair failed and we were unable to recover it.
00:38:52.936 [2024-12-09 10:49:37.318593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.936 [2024-12-09 10:49:37.318629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.936 qpair failed and we were unable to recover it. 00:38:52.936 [2024-12-09 10:49:37.318884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.936 [2024-12-09 10:49:37.318911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.936 qpair failed and we were unable to recover it. 00:38:52.936 [2024-12-09 10:49:37.319048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.319112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.319394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.319422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.319542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.319575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.319687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.319729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.319870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.319896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.320042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.320086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.320230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.320266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.320435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.320460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 
00:38:52.937 [2024-12-09 10:49:37.320643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.320677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.320840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.320869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.320966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.320992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.321223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.321287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.321528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.321593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.321808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.321835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.322051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.322086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.322235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.322268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.322457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.322481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.322670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.322704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 
00:38:52.937 [2024-12-09 10:49:37.322913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.322978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.323205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.323230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.323485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.323552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.323829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.323867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.324031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.324057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.324155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.324180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.324335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.324371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.324524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.324560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.324688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.324713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.324875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.324908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 
00:38:52.937 [2024-12-09 10:49:37.325038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.325078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.325298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.325332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.325470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.325533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.325802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.325828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.325935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.325961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.326125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.326158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.326352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.326378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.326574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.326607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.937 [2024-12-09 10:49:37.326730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.937 [2024-12-09 10:49:37.326777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.937 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.326930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.326968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 
00:38:52.938 [2024-12-09 10:49:37.327213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.327276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.327616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.327650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.327869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.327895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.328080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.328113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.328292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.328328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.328517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.328550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.328759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.328786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.328951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.328984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.329165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.329191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.329389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.329423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 
00:38:52.938 [2024-12-09 10:49:37.329593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.329627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.329770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.329797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.330023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.330049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.330286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.330349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.330615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.330643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.330820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.330857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.331052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.331085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.331199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.331225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.331474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.331538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.331834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.331868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 
00:38:52.938 [2024-12-09 10:49:37.332014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.332040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.332133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.332159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.332339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.332404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.332598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.332624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.332815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.332850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.333037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.333076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.333273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.333303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.333478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.333517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.333729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.333763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.333914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.333941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 
00:38:52.938 [2024-12-09 10:49:37.334098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.334161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.334296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.334331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.334457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.334482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.334710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.334778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.334971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.335014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.335160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.335184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.335323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.938 [2024-12-09 10:49:37.335348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.938 qpair failed and we were unable to recover it. 00:38:52.938 [2024-12-09 10:49:37.335487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.335521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.335691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.335717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.335859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.335908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 
00:38:52.939 [2024-12-09 10:49:37.336099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.336150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.336348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.336377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.336523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.336565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.336843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.336878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.337046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.337072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.337226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.337275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.337517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.337581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.337804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.337831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.338002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.338036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.338220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.338253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 
00:38:52.939 [2024-12-09 10:49:37.338377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.338402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.338572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.338619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.338811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.338853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.339030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.339061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.339191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.339256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.339494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.339555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.339767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.339793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.339893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.339919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.340044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.340077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.340225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.340267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 
00:38:52.939 [2024-12-09 10:49:37.340437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.340469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.340650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.340685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.340845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.340871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.340997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.341037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.341214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.341252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.341430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.341456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.341627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.341660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.341813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.341849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.342021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.342047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.342182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.342264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 
00:38:52.939 [2024-12-09 10:49:37.342474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.342533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.342740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.342767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.342894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.342919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.343102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.939 [2024-12-09 10:49:37.343136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.939 qpair failed and we were unable to recover it. 00:38:52.939 [2024-12-09 10:49:37.343277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.343302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.343453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.343480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.343627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.343659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.343797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.343823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.343923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.343949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.344128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.344193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 
00:38:52.940 [2024-12-09 10:49:37.344414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.344440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.344564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.344591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.344807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.344874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.345116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.345140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.345272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.345313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.345454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.345486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.345681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.345760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.345925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.345951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.346062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.346140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 00:38:52.940 [2024-12-09 10:49:37.346302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.940 [2024-12-09 10:49:37.346328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:52.940 qpair failed and we were unable to recover it. 
00:38:52.940 [2024-12-09 10:49:37.346495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.940 [2024-12-09 10:49:37.346558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420
00:38:52.940 qpair failed and we were unable to recover it.
00:38:52.940 [2024-12-09 10:49:37.346839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.940 [2024-12-09 10:49:37.346879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:52.940 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for tqpair=0xefa5d0 and tqpair=0x7f7f78000b90 through [2024-12-09 10:49:37.379792]; duplicate records elided ...]
00:38:52.944 [2024-12-09 10:49:37.379916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf08570 is same with the state(6) to be set
00:38:52.944 [2024-12-09 10:49:37.380270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.944 [2024-12-09 10:49:37.380366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420
00:38:52.944 qpair failed and we were unable to recover it.
[... the triplet repeats for tqpair=0x7f7f70000b90 through [2024-12-09 10:49:37.381660]; duplicate records elided ...]
00:38:52.944 [2024-12-09 10:49:37.381835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.944 [2024-12-09 10:49:37.381871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:52.944 qpair failed and we were unable to recover it.
[... the triplet resumes for tqpair=0xefa5d0 and repeats through [2024-12-09 10:49:37.393682]; duplicate records elided ...]
00:38:52.946 [2024-12-09 10:49:37.393912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.393937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.394135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.394200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.394481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.394544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.394822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.394848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.395042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.395105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.395402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.395465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.395730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.395756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.395918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.395981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.396248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.396311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.396563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.396586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 
00:38:52.946 [2024-12-09 10:49:37.396781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.396846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.397154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.397228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.397517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.397540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.397697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.397777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.398065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.398128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.398371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.398394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.398558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.398621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.398873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.398900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.399039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.399064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.399249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.399312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 
00:38:52.946 [2024-12-09 10:49:37.399567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.399630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.399890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.399916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.400144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.400208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.400508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.400572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.400848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.400873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.401105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.401171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.401441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.401505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.401799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.401823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.402045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.402109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.402404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.402467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 
00:38:52.946 [2024-12-09 10:49:37.402764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.402788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.402952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.403024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.403275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.403339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.403590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.403612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.403874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.403939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.404221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.946 [2024-12-09 10:49:37.404285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.946 qpair failed and we were unable to recover it. 00:38:52.946 [2024-12-09 10:49:37.404497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.404521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.404704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.404789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.405019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.405084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.405324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.405348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 
00:38:52.947 [2024-12-09 10:49:37.405527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.405591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.405814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.405881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.406124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.406148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.406345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.406410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.406580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.406644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.406865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.406890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.407092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.407156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.407457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.407521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.407805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.407829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.407970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.407995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 
00:38:52.947 [2024-12-09 10:49:37.408256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.408319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.408585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.408649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.408900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.408930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.409168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.409231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.409471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.409494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.409755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.409821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.410094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.410157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.410445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.410468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.410691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.410772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.411034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.411099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 
00:38:52.947 [2024-12-09 10:49:37.411394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.411417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.411619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.411684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.411925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.411990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.412222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.412245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.412491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.412556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.412826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.412891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.413166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.413189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.413356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.413419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.413664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.413745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.413946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.413970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 
00:38:52.947 [2024-12-09 10:49:37.414191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.414255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.414546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.414609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.414894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.414918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.415088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.415152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.415400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.415465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.947 qpair failed and we were unable to recover it. 00:38:52.947 [2024-12-09 10:49:37.415712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.947 [2024-12-09 10:49:37.415805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.415946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.415980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.416158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.416221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.416454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.416477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.416662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.416744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 
00:38:52.948 [2024-12-09 10:49:37.416926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.416949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.417116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.417139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.417312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.417375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.417639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.417702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.417959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.417983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.418094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.418118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.418364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.418427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.418691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.418714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.418875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.418939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.419250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.419313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 
00:38:52.948 [2024-12-09 10:49:37.419592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.419615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.419836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.419901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.420139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.420203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.420468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.420492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.420702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.420783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.421004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.421068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.421345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.421368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.421564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.421628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.421838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.421903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.422157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.422181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 
00:38:52.948 [2024-12-09 10:49:37.422404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.422466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.422773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.422838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.423087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.423110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.423319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.423382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.423646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.423710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.424033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.424056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.424295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.424358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.424667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.424751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.425024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.425063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.425313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.425376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 
00:38:52.948 [2024-12-09 10:49:37.425630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.425692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.425994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.426032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.426193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.426256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.426526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.948 [2024-12-09 10:49:37.426589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.948 qpair failed and we were unable to recover it. 00:38:52.948 [2024-12-09 10:49:37.426802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.426827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.427035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.427099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.427399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.427463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.427777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.427801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.427987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.428048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.428382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.428444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 
00:38:52.949 [2024-12-09 10:49:37.428714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.428809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.429072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.429136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.429376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.429439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.429739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.429810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.430046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.430109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.430350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.430414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.430729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.430754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.431006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.431069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.431347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.431411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 00:38:52.949 [2024-12-09 10:49:37.431727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.949 [2024-12-09 10:49:37.431766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.949 qpair failed and we were unable to recover it. 
00:38:52.949 [2024-12-09 10:49:37.431974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.949 [2024-12-09 10:49:37.432037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:52.949 qpair failed and we were unable to recover it.
[the three-line error above repeats essentially verbatim ~210 times, differing only in sub-millisecond timestamps, from 10:49:37.431974 through 10:49:37.491151 (log clock 00:38:52.949 to 00:38:52.954); every occurrence is for the same queue pair, tqpair=0xefa5d0, against addr=10.0.0.2, port=4420]
00:38:52.955 [2024-12-09 10:49:37.491298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.491330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.491508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.491543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.491736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.491782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.491947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.491985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.492229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.492263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.492435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.492474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.492625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.492657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.492822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.492848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.492979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.493018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.493216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.493280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 
00:38:52.955 [2024-12-09 10:49:37.493584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.493648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.493970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.493999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.494234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.494267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.494438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.494480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.494714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.494749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.494910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.494974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.495274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.495338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.495585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.495626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.495809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.495843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.496116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.496181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 
00:38:52.955 [2024-12-09 10:49:37.496445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.496483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.496650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.496686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.496908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.496975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.497192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.497217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.497383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.497457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.497688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.497741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.498045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.498071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.498272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.498306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.498489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.498523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.498835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.498861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 
00:38:52.955 [2024-12-09 10:49:37.499004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.499073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.499403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.499437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.499579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.499604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.499775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.499842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.500122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.500156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.500329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.500354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.500485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.500531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.500676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.500716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.500913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.955 [2024-12-09 10:49:37.500939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.955 qpair failed and we were unable to recover it. 00:38:52.955 [2024-12-09 10:49:37.501180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.501246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 
00:38:52.956 [2024-12-09 10:49:37.501504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.501568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.501876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.501905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.502154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.502218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.502447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.502511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.502807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.502834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.502977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.503022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.503210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.503244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.503426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.503451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.503666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.503698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.503907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.503943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 
00:38:52.956 [2024-12-09 10:49:37.504131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.504158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.504401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.504465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.504783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.504849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.505151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.505177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.505407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.505441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.505616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.505653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.505803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.505841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.506004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.506069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.506373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.506436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.506698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.506730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 
00:38:52.956 [2024-12-09 10:49:37.506912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.506947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.507091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.507124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.507324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.507349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.507532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.507565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.507773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.507800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.508015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.508040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.508195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.508265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.508476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.508513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.508791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.508818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.508975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.509045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 
00:38:52.956 [2024-12-09 10:49:37.509321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.509355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.509523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.509548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.509669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.509694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.509866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.509899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.510040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.510066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.510158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.510183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.510367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.510428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.956 qpair failed and we were unable to recover it. 00:38:52.956 [2024-12-09 10:49:37.510728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.956 [2024-12-09 10:49:37.510754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.510914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.510947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.511151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.511184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 
00:38:52.957 [2024-12-09 10:49:37.511337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.511362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.511511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.511537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.511736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.511774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.511962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.511987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.512171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.512204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.512449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.512482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.512669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.512694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.512893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.512927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.513139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.513204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.513498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.513524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 
00:38:52.957 [2024-12-09 10:49:37.513694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.513780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.513986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.514019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.514232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.514258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.514444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.514483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.514633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.514699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.515001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.515041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.515222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.515287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.515551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.515616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.515910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.515936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.516111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.516174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 
00:38:52.957 [2024-12-09 10:49:37.516412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.516475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.516805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.516830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.517004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.517042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.517313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.517377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.517688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.517767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.518086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.518149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.518397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.518461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.518802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.518827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.519008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.519080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.519394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.519457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 
00:38:52.957 [2024-12-09 10:49:37.519714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.519760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.520011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.520074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.520336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.520400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.520697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.520728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.957 qpair failed and we were unable to recover it. 00:38:52.957 [2024-12-09 10:49:37.520937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.957 [2024-12-09 10:49:37.521000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.521264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.521327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.521550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.521573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.521695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.521742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.521935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.521999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.522201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.522233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 
00:38:52.958 [2024-12-09 10:49:37.522357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.522380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.522575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.522639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.522959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.522984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.523122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.523185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.523493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.523556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.523866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.523891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.524143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.524206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.524485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.524549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.524875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.524899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 00:38:52.958 [2024-12-09 10:49:37.525105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.958 [2024-12-09 10:49:37.525168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:52.958 qpair failed and we were unable to recover it. 
00:38:52.958 [2024-12-09 10:49:37.525417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:52.958 [2024-12-09 10:49:37.525481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:52.958 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously between 10:49:37.525 and 10:49:37.578 ...]
00:38:53.244 [2024-12-09 10:49:37.577996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.244 [2024-12-09 10:49:37.578021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.244 qpair failed and we were unable to recover it.
00:38:53.244 [2024-12-09 10:49:37.578185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.578248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.578457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.578495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.578695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.578775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.579002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.579066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.579295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.579318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.579452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.579528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.579765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.579830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.580073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.580097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.580280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.580343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.580616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.580680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 
00:38:53.244 [2024-12-09 10:49:37.580877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.580903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.581095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.581158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.581446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.581510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.581818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.581843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.581967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.581992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.582245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.244 [2024-12-09 10:49:37.582308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.244 qpair failed and we were unable to recover it. 00:38:53.244 [2024-12-09 10:49:37.582572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.582647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.582894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.582919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.583037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.583100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.583304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.583326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 
00:38:53.245 [2024-12-09 10:49:37.583458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.583481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.583626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.583690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.583925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.583950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.584138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.584202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.584409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.584473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.584729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.584772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.584928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.584996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.585314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.585378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.585637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.585660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.585842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.585908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 
00:38:53.245 [2024-12-09 10:49:37.586230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.586293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.586581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.586609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.586793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.586858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.587116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.587179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.587464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.587487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.587650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.587713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.587957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.588021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.588254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.588277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.588393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.588447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.588764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.588829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 
00:38:53.245 [2024-12-09 10:49:37.589045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.589068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.589181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.589204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.589404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.589468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.589663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.589687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.589826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.589851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.590072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.590133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.590397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.590420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.590658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.590741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.590952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.591015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.591271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.591295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 
00:38:53.245 [2024-12-09 10:49:37.591485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.591548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.591797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.245 [2024-12-09 10:49:37.591862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.245 qpair failed and we were unable to recover it. 00:38:53.245 [2024-12-09 10:49:37.592110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.592134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.592333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.592397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.592673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.592748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.592942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.592966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.593082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.593105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.593323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.593386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.593619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.593653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.593863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.593897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 
00:38:53.246 [2024-12-09 10:49:37.594046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.594078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.594255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.594279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.594373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.594397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.594558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.594630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.594840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.594865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.594975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.595000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.595195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.595268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.595481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.595505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.595618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.595641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.595797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.595831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 
00:38:53.246 [2024-12-09 10:49:37.596004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.596029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.596244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.596307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.596572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.596636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.596826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.596851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.596964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.596990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.597196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.597259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.597489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.597512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.597669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.597692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.597858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.597884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.597978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.598003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 
00:38:53.246 [2024-12-09 10:49:37.598154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.598204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.598497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.598561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.598828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.598853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.599007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.599063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.599347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.599410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.599660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.599683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.599818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.599865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.599997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.600071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.600292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.600314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.600473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.600549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 
00:38:53.246 [2024-12-09 10:49:37.600807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.246 [2024-12-09 10:49:37.600840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.246 qpair failed and we were unable to recover it. 00:38:53.246 [2024-12-09 10:49:37.600955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.600980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.601169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.601193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.601508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.601571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.601813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.601837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.601938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.601962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.602185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.602246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.602470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.602492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.602698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.602805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.602935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.602968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 
00:38:53.247 [2024-12-09 10:49:37.603098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.603137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.603294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.603318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.603556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.603620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.603877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.603901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.604013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.604053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.604274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.604336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.604672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.604757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.604939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.604965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.605129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.605191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.605400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.605427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 
00:38:53.247 [2024-12-09 10:49:37.605646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.605715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.605910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.605942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.606079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.606115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.606322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.606385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.606681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.606772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.606914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.606940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.607070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.607093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.607252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.607314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.607613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.607636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.607858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.607892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 
00:38:53.247 [2024-12-09 10:49:37.608055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.608117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.608342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.247 [2024-12-09 10:49:37.608365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.247 qpair failed and we were unable to recover it. 00:38:53.247 [2024-12-09 10:49:37.608522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.608592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.608808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.608841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.608968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.608991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.609135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.609159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.609416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.609478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.609667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.609701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.609850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.609875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.610105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.610165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 
00:38:53.248 [2024-12-09 10:49:37.610402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.610426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.610547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.610598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.610805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.610838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.611024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.611048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.611234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.611307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.611518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.611581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.611782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.611807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.611940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.611966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.612157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.612218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.612466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.612488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 
00:38:53.248 [2024-12-09 10:49:37.612647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.612708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.612891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.612923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.613116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.613154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.613337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.613401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.613646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.613710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.613885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.613910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.614058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.614097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.614260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.614322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.614573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.614596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.614715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.614746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 
00:38:53.248 [2024-12-09 10:49:37.614853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.614885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.615000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.615023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.615179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.615203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.615397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.615460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.615805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.615831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.615952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.615977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.616138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.616202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.616436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.616459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.616627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.616688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.248 [2024-12-09 10:49:37.616872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.616896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 
00:38:53.248 [2024-12-09 10:49:37.617002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.248 [2024-12-09 10:49:37.617042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.248 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.617164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.617187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.617421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.617485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.617715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.617761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.617887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.617912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.618040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.618065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.618224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.618248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.618508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.618575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.618828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.618862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.618992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.619016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 
00:38:53.249 [2024-12-09 10:49:37.619151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.619175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.619333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.619366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.619538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.619564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.619717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.619751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.619849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.619894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.620096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.620125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.620282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.620315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.620460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.620493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.620617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.620641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.620791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.620818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 
00:38:53.249 [2024-12-09 10:49:37.620940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.620972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.621188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.621211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.621388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.621411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.621599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.621662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.621850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.621876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.621970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.621995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.622180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.622244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.622447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.622470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.622603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.622628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.622773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.622838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 
00:38:53.249 [2024-12-09 10:49:37.623041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.623081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.623197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.623220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.623425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.623488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.623753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.623779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.623881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.623949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.624168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.624231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.624453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.624476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.624608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.249 [2024-12-09 10:49:37.624633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.249 qpair failed and we were unable to recover it. 00:38:53.249 [2024-12-09 10:49:37.624871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.624937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.625174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.625197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 
00:38:53.250 [2024-12-09 10:49:37.625383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.625447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.625646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.625710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.625914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.625944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.626088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.626113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.626324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.626388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.626591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.626614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.626777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.626842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.627071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.627134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.627357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.627380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.627556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.627619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 
00:38:53.250 [2024-12-09 10:49:37.627835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.627861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.627961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.627995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.628136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.628175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.628382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.628445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.628644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.628666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.628793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.628818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.628980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.629045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.629291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.629314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.629513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.629576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.629775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.629839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 
00:38:53.250 [2024-12-09 10:49:37.630051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.630074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.630206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.630229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.630453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.630517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.630742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.630793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.630891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.630935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.631136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.631198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.631414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.631437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.631576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.631621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.631820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.631885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.632172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.632196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 
00:38:53.250 [2024-12-09 10:49:37.632365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.632427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.632615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.632677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.632903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.632928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.633060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.633100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.633251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.633314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.633558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.633582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.633745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.633789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.250 qpair failed and we were unable to recover it. 00:38:53.250 [2024-12-09 10:49:37.633979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.250 [2024-12-09 10:49:37.634042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.634292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.634316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.634481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.634546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 
00:38:53.251 [2024-12-09 10:49:37.634766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.634832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.635068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.635092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.635260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.635325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.635545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.635618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.635861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.635887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.636003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.636028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.636189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.636251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.636469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.636492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.636640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.636714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.636918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.636980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 
00:38:53.251 [2024-12-09 10:49:37.637174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.637198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.637320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.637345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.637521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.637583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.637785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.637811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.637940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.637965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.638190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.638251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.638471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.638495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.638641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.638665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.638851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.638877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.638968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.638993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 
00:38:53.251 [2024-12-09 10:49:37.639091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.639116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.639244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.639308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.639476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.639499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.639667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.639691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.639840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.639905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.640094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.640131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.640267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.640290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.640462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.640523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.640744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.640784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.640877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.640902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 
00:38:53.251 [2024-12-09 10:49:37.641068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.641142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.641360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.641384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.641523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.641571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.251 [2024-12-09 10:49:37.641796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.251 [2024-12-09 10:49:37.641859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.251 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.642061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.642084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.642299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.642361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.642576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.642637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.642838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.642862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.643022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.643046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.643236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.643300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 
00:38:53.252 [2024-12-09 10:49:37.643540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.643564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.643783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.643849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.644102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.644165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.644393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.644417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.644624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.644688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.644882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.644946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.645165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.645189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.645329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.645405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.645656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.645738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.645916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.645943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 
00:38:53.252 [2024-12-09 10:49:37.646089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.646113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.646249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.646312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.646531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.646555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.646737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.646799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.647036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.647098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.647305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.647329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.647535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.647599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.647846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.647911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.648169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.648193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.648391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.648465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 
00:38:53.252 [2024-12-09 10:49:37.648647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.648710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.648929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.648955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.649126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.649172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.649394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.649457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.649730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.649757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.649841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.649899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.650094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.650157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.650397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.650421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.650583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.650647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 00:38:53.252 [2024-12-09 10:49:37.650853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.252 [2024-12-09 10:49:37.650919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.252 qpair failed and we were unable to recover it. 
00:38:53.257 [2024-12-09 10:49:37.696787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.257 [2024-12-09 10:49:37.696851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.257 qpair failed and we were unable to recover it. 00:38:53.257 [2024-12-09 10:49:37.697072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.697137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.697301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.697330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.697484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.697508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.697648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.697711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.697979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.698004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.698182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.698246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.698538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.698601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.698832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.698857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.698975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.699000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 
00:38:53.258 [2024-12-09 10:49:37.699264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.699327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.699616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.699680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.699932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.699957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.700202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.700265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.700476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.700509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.700651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.700739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.700909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.700934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.701111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.701134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.701359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.701419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.701664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.701763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 
00:38:53.258 [2024-12-09 10:49:37.702047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.702073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.702291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.702355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.702644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.702706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.702936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.702961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.703127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.703188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.703404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.703467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.703681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.703704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.703904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.703968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.704234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.704296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.704512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.704536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 
00:38:53.258 [2024-12-09 10:49:37.704777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.704843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.705147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.705210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.705477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.705501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.705763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.705828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.706024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.706088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.706339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.706362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.706608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.706671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.706932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.706998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.707234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.707258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 00:38:53.258 [2024-12-09 10:49:37.707421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.258 [2024-12-09 10:49:37.707484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.258 qpair failed and we were unable to recover it. 
00:38:53.258 [2024-12-09 10:49:37.707714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.707794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.708020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.708044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.708251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.708314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.708622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.708686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.708944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.708969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.709198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.709261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.709569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.709632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.709883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.709908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.710089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.710154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.710409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.710473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 
00:38:53.259 [2024-12-09 10:49:37.710741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.710804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.711002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.711067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.711374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.711438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.711767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.711810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.711989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.712066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.712367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.712430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.712773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.712804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.712952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.712978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.713168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.713232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.713462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.713486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 
00:38:53.259 [2024-12-09 10:49:37.713645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.713718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.713957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.714020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.714356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.714379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.714568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.714640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.714915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.714981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.715253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.715277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.715431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.715495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.715778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.715844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.716113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.716138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.716378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.716443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 
00:38:53.259 [2024-12-09 10:49:37.716750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.716797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.716920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.716945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.717176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.717240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.717537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.717600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.717853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.717878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.718071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.718135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.259 [2024-12-09 10:49:37.718378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.259 [2024-12-09 10:49:37.718441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.259 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.718710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.718756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.718927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.718992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.719249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.719312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 
00:38:53.260 [2024-12-09 10:49:37.719626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.719650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.719843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.719909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.720214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.720278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.720586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.720613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.720822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.720887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.721191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.721256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.721496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.721519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.721733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.721799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.722050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.722115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.722422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.722446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 
00:38:53.260 [2024-12-09 10:49:37.722636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.722699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.722904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.722969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.723259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.723282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.723529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.723592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.723821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.723887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.724186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.724210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.724462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.724525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.724809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.724834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.724992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.725017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.725199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.725262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 
00:38:53.260 [2024-12-09 10:49:37.725612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.725675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.725964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.725990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.726201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.726264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.726578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.726642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.726933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.726960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.727144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.727208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.727465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.727499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.727664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.727689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.727856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.727909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.728137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.728170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 
00:38:53.260 [2024-12-09 10:49:37.728343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.728368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.728566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.728600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.728821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.260 [2024-12-09 10:49:37.728855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.260 qpair failed and we were unable to recover it. 00:38:53.260 [2024-12-09 10:49:37.729073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.729099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.729334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.729368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.729596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.729633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.729809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.729835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.729956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.729981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.730213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.730276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.730579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.730606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 
00:38:53.261 [2024-12-09 10:49:37.730808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.730845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.731021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.731065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.731294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.731334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.731497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.731539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.731780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.731811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.732021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.732046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.732329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.732362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.732629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.732662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.732906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.732934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.733085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.733118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 
00:38:53.261 [2024-12-09 10:49:37.733319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.733389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.733657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.733681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.733874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.733944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.734202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.734236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.734412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.734444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.734548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.734573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.734718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.734760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.734934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.734959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.735190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.735254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.735574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.735637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 
00:38:53.261 [2024-12-09 10:49:37.735969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.735995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.736196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.736230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.736406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.736443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.736643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.736672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.736791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.736857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.737086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.737149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.737349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.737373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.261 qpair failed and we were unable to recover it. 00:38:53.261 [2024-12-09 10:49:37.737518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.261 [2024-12-09 10:49:37.737544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.737680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.737714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.737901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.737928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 
00:38:53.262 [2024-12-09 10:49:37.738088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.738120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.738299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.738337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.738524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.738552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.738670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.738745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.738970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.739032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.739280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.739309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.739484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.739517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.739698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.739740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.739894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.739920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.740092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.740138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 
00:38:53.262 [2024-12-09 10:49:37.740390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.740453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.740740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.740801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.741043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.741080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.741284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.741318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.741461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.741486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.741671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.741706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.741868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.741893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.742029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.742054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.742231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.742264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.742451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.742488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 
00:38:53.262 [2024-12-09 10:49:37.742637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.742663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.742796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.742850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.743029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.743063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.743236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.743262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.743391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.743436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.743555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.743587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.743745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.743772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.744000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.744034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.744206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.744238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.744391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.744416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 
00:38:53.262 [2024-12-09 10:49:37.744534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.744559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.744683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.744717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.744916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.744942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.745160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.745192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.262 [2024-12-09 10:49:37.745443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.262 [2024-12-09 10:49:37.745477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.262 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.745590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.745618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.745715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.745748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.745933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.745967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.746202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.746231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.746403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.746439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 
00:38:53.263 [2024-12-09 10:49:37.746622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.746656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.746805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.746831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.746970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.746999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.747186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.747222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.747378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.747404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.747569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.747612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.747761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.747795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.747984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.748010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.748172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.748206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.748349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.748381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 
00:38:53.263 [2024-12-09 10:49:37.748657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.748682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.748829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.748875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.749083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.749119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.749332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.749357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.749462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.749508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.749679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.749712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.749921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.749946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.750100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.750148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.750332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.750366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.750517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.750543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 
00:38:53.263 [2024-12-09 10:49:37.750676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.750703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.750905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.750939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.751147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.751173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.751338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.751372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.751611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.751645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.751870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.751897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.752009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.752041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.752216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.752252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.752505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.752530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.752711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.752758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 
00:38:53.263 [2024-12-09 10:49:37.752988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.753020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.753225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.753251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.753422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.753455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.263 qpair failed and we were unable to recover it. 00:38:53.263 [2024-12-09 10:49:37.753666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.263 [2024-12-09 10:49:37.753700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.753914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.753940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.754064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.754097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.754244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.754276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.754455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.754490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.754718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.754795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.754968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.755002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 
00:38:53.264 [2024-12-09 10:49:37.755143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.755168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.755273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.755298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.755542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.755576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.755765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.755792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.755956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.755981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.756240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.756274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.756502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.756527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.756704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.756746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.756974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.757018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.757206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.757234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 
00:38:53.264 [2024-12-09 10:49:37.757415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.757448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.757588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.757621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.757799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.757828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.757948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.757973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.758172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.758205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.758358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.758382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.758521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.758548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.758737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.758772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.759012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.759037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.759158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.759190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 
00:38:53.264 [2024-12-09 10:49:37.759364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.759398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.759533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.759559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.759687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.759713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.759936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.759970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.760090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.760124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.760253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.760279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.760441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.760473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.760711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.760761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.760929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.760968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.761134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.761166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 
00:38:53.264 [2024-12-09 10:49:37.761355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.761389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.761540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.761576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.264 qpair failed and we were unable to recover it. 00:38:53.264 [2024-12-09 10:49:37.761699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.264 [2024-12-09 10:49:37.761743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.761914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.761940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.762131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.762165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.762365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.762399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.762641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.762674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.762884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.762911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.763099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.763132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.763323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.763348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 
00:38:53.265 [2024-12-09 10:49:37.763525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.763557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.763731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.763777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.763941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.763965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.764107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.764140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.764314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.764349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.764469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.764494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.764594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.764618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.764789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.764835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.764967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.764993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.765229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.765262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 
00:38:53.265 [2024-12-09 10:49:37.765424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.765457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.765629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.765656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.765815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.765867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.766141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.766175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.766319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.766345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.766517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.766563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.766713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.766756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.766930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.766954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.767083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.767128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.767326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.767360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 
00:38:53.265 [2024-12-09 10:49:37.767605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.767645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.767819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.767854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.768096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.768181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.768476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.768502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.768772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.768807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.769044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.769081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.769259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.265 [2024-12-09 10:49:37.769287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.265 qpair failed and we were unable to recover it. 00:38:53.265 [2024-12-09 10:49:37.769525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.769589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.769884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.769919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.770055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.770080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 
00:38:53.266 [2024-12-09 10:49:37.770237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.770262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.770434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.770467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.770704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.770738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.770881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.770943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.771193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.771255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.771520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.771547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.771715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.771791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.772056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.772120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.772428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.772454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.772633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.772717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 
00:38:53.266 [2024-12-09 10:49:37.773042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.773106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.773440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.773465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.773670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.773757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.774064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.774128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.774444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.774470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.774652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.774686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.774955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.775021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.775283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.775312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.775472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.775506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 00:38:53.266 [2024-12-09 10:49:37.775756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.266 [2024-12-09 10:49:37.775822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.266 qpair failed and we were unable to recover it. 
[identical three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back from 10:49:37.776087 through 10:49:37.822415]
00:38:53.271 [2024-12-09 10:49:37.822585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.271 [2024-12-09 10:49:37.822618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.271 qpair failed and we were unable to recover it. 00:38:53.271 [2024-12-09 10:49:37.822796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.271 [2024-12-09 10:49:37.822828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.271 qpair failed and we were unable to recover it. 00:38:53.271 [2024-12-09 10:49:37.823002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.271 [2024-12-09 10:49:37.823029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.271 qpair failed and we were unable to recover it. 00:38:53.271 [2024-12-09 10:49:37.823161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.823219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.823470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.823534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.823846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.823873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.824042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.824075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.824270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.824303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.824446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.824471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.824629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.824677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 
00:38:53.272 [2024-12-09 10:49:37.824893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.824966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.825260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.825285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.825447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.825524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.825759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.825793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.826008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.826033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.826205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.826239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.826400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.826433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.826610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.826651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.826820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.826855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.827008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.827041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 
00:38:53.272 [2024-12-09 10:49:37.827162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.827202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.827335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.827361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.827505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.827548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.827733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.827759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.827884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.827925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.828065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.828097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.828243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.828268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.828405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.828430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.828624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.828657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.828856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.828881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 
00:38:53.272 [2024-12-09 10:49:37.829059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.829092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.829205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.829238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.829414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.829441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.829617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.829653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.829846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.829872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.830036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.830061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.830211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.830243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.830404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.830437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.830615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.830643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.830806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.830840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 
00:38:53.272 [2024-12-09 10:49:37.831011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.831044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.272 qpair failed and we were unable to recover it. 00:38:53.272 [2024-12-09 10:49:37.831175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.272 [2024-12-09 10:49:37.831214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.831385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.831429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.831604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.831637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.831799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.831825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.832004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.832037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.832170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.832202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.832335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.832360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.832532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.832558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.832679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.832711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 
00:38:53.273 [2024-12-09 10:49:37.832870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.832906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.833031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.833055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.833180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.833211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.833391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.833417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.833510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.833535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.833736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.833768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.833938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.833963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.834135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.834169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.834396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.834429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.834600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.834624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 
00:38:53.273 [2024-12-09 10:49:37.834768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.834817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.835010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.835042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.835268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.835293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.835439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.835476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.835663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.835696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.835943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.835968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.836077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.836138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.836390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.836453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.836756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.836782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.836899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.836932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 
00:38:53.273 [2024-12-09 10:49:37.837068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.837130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.837350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.837373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.837583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.837647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.837949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.838015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.838290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.838314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.838505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.838569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.838863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.838929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.273 [2024-12-09 10:49:37.839200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.273 [2024-12-09 10:49:37.839224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.273 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.839436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.839499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.839777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.839802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 
00:38:53.274 [2024-12-09 10:49:37.839926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.839950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.840126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.840149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.840306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.840367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.840645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.840669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.840911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.840975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.841264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.841327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.841619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.841643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.841832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.841896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.842204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.842267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.842540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.842564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 
00:38:53.274 [2024-12-09 10:49:37.842741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.842804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.843054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.843118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.843413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.843437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.843696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.843777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.844082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.844146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.844393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.844423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.844614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.844677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.844990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.845053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.845350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.845374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.845548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.845612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 
00:38:53.274 [2024-12-09 10:49:37.845863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.845929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.846238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.846261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.846456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.846519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.846806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.846872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.847167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.847191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.847394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.847458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.847785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.847810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.847935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.847960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.848154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.848217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.848525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.848588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 
00:38:53.274 [2024-12-09 10:49:37.848850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.848875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.849039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.849103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.849356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.849418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.849716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.849760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.849968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.850032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.850340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.850403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.274 [2024-12-09 10:49:37.850669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.274 [2024-12-09 10:49:37.850693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.274 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.850842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.850919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.851226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.851290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.851562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.851586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 
00:38:53.275 [2024-12-09 10:49:37.851754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.851819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.852075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.852138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.852470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.852494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.852688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.852770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.853043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.853106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.853399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.853422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.853609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.853673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.854000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.854063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.854325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.854349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.854485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.854548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 
00:38:53.275 [2024-12-09 10:49:37.854789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.854863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.855081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.855109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.855304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.855367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.855679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.855778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.856081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.856105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.856286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.856349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.856639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.856701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.856988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.857027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.857202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.857266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.857559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.857621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 
00:38:53.275 [2024-12-09 10:49:37.857935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.857960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.858187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.858250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.858556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.858618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.858903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.858928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.859190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.859253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.859571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.859635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.859853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.859879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.275 qpair failed and we were unable to recover it. 00:38:53.275 [2024-12-09 10:49:37.860109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.275 [2024-12-09 10:49:37.860183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.860470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.860533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.860830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.860856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 
00:38:53.276 [2024-12-09 10:49:37.861032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.861095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.861344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.861408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.861656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.861680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.861889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.861955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.862237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.862300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.862543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.862567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.862783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.862846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.863140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.863203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.863500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.863524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.863787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.863852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 
00:38:53.276 [2024-12-09 10:49:37.864118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.864181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.864410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.864434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.864566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.864622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.864962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.865027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.865338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.865361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.865501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.865565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.865887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.865952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.866199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.866222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.866408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.866471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.866735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.866800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 
00:38:53.276 [2024-12-09 10:49:37.866970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.866995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.867154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.867179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.867428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.867493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.867792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.867818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.867951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.868014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.868270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.868340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.868572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.868595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.868792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.868856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.869074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.869137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.869346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.869369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 
00:38:53.276 [2024-12-09 10:49:37.869511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.869555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.869794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.869822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.869951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.869976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.870157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.870221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.870439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.870502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.870757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.276 [2024-12-09 10:49:37.870783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.276 qpair failed and we were unable to recover it. 00:38:53.276 [2024-12-09 10:49:37.870903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.870934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.871063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.871089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.871287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.871311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.871514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.871577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 
00:38:53.277 [2024-12-09 10:49:37.871842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.871908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.872125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.872149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.872288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.872313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.872474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.872520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.872704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.872750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.872912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.872967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.873274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.873341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.873542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.873566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.873744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.873767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.873991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.874076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 
00:38:53.277 [2024-12-09 10:49:37.874259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.874282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.874453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.874476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.874598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.874638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.874786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.874810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.874930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.875008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.875216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.875279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.875499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.875523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.875665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.875688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.875867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.875892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 00:38:53.277 [2024-12-09 10:49:37.876042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.277 [2024-12-09 10:49:37.876067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.277 qpair failed and we were unable to recover it. 
00:38:53.556 [2024-12-09 10:49:37.876260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.876322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.876589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.876653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.876856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.876881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.877042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.877106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.877357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.877421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.877710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.877769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.877918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.877982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.878204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.878267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.878501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.878525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.878693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.878718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 
00:38:53.556 [2024-12-09 10:49:37.878885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.878909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.879025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.879063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.879187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.879212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.879355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.879379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.879564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.879587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.879767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.879794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.879929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.879968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.880130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.880153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.880330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.880354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.880549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.880574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 
00:38:53.556 [2024-12-09 10:49:37.880711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.880742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.880885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.880913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.881048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.881076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.881252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.881280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.881495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.881524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.881685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.881712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.881831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.881859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.882007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.556 [2024-12-09 10:49:37.882034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.556 qpair failed and we were unable to recover it. 00:38:53.556 [2024-12-09 10:49:37.882221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.882284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.882624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.882688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 
00:38:53.557 [2024-12-09 10:49:37.882926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.882959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.883179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.883243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.883520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.883548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.883790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.883819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.883991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.884055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.884350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.884377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.884589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.884651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.884885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.884950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.885180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.885208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.885419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.885483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 
00:38:53.557 [2024-12-09 10:49:37.885747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.885812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.886063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.886097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.886281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.886344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.886587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.886650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.886886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.886914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.887092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.887156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.887418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.887481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.887695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.887741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.887878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.887942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.888244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.888307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 
00:38:53.557 [2024-12-09 10:49:37.888559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.888588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.888749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.888824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.889071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.889135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.889338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.889367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.889518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.889581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.889841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.889907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.890210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.890238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.557 [2024-12-09 10:49:37.890431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.557 [2024-12-09 10:49:37.890505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.557 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.890819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.890885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.891174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.891202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 
00:38:53.558 [2024-12-09 10:49:37.891368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.891431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.891677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.891755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.891979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.892006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.892193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.892256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.892557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.892620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.892858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.892887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.893035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.893106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.893397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.893460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.893688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.893716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.893918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.893981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 
00:38:53.558 [2024-12-09 10:49:37.894275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.894338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.894645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.894708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.894878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.894906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.895110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.895174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.895380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.895407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.895602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.895665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.895949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.896013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.896309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.896337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.896595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.896659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.896917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.896981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 
00:38:53.558 [2024-12-09 10:49:37.897238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.897266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.897484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.897547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.897810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.897876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.898131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.898159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.898304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.898367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.898639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.898702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.898943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.898972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.899148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.899211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.899448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.899511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 00:38:53.558 [2024-12-09 10:49:37.899785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.558 [2024-12-09 10:49:37.899814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.558 qpair failed and we were unable to recover it. 
00:38:53.558 [2024-12-09 10:49:37.899962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.900025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.900337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.900400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.900661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.900689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.900867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.900932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.901174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.901238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.901529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.901556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.901715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.901801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.901965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.902031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.902300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.902333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.902481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.902544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 
00:38:53.559 [2024-12-09 10:49:37.902791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.902857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.903153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.903180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.903436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.903498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.903705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.903804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.904071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.904099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.904281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.904343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.904568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.904631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.904970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.904999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.905208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.905271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 00:38:53.559 [2024-12-09 10:49:37.905559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.559 [2024-12-09 10:49:37.905622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.559 qpair failed and we were unable to recover it. 
00:38:53.564 [2024-12-09 10:49:37.963254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.564 [2024-12-09 10:49:37.963318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.564 qpair failed and we were unable to recover it. 00:38:53.564 [2024-12-09 10:49:37.963593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.564 [2024-12-09 10:49:37.963656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.564 qpair failed and we were unable to recover it. 00:38:53.564 [2024-12-09 10:49:37.963935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.564 [2024-12-09 10:49:37.963969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.564 qpair failed and we were unable to recover it. 00:38:53.564 [2024-12-09 10:49:37.964210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.564 [2024-12-09 10:49:37.964273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.564 qpair failed and we were unable to recover it. 00:38:53.564 [2024-12-09 10:49:37.964483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.564 [2024-12-09 10:49:37.964545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.564 qpair failed and we were unable to recover it. 00:38:53.564 [2024-12-09 10:49:37.964809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.564 [2024-12-09 10:49:37.964837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.564 qpair failed and we were unable to recover it. 00:38:53.564 [2024-12-09 10:49:37.964937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.564 [2024-12-09 10:49:37.964998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.564 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.965307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.965370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.965637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.965700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.965910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.965938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 
00:38:53.565 [2024-12-09 10:49:37.966171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.966234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.966542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.966570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.966835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.966899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.967165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.967227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.967479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.967507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.967693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.967781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.968105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.968169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.968472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.968500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.968656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.968715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.969030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.969094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 
00:38:53.565 [2024-12-09 10:49:37.969404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.969432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.969707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.969790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.970094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.970157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.970385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.970413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.970564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.970628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.970943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.971007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.971259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.971287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.971444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.971506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.971765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.971829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.972141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.972169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 
00:38:53.565 [2024-12-09 10:49:37.972480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.972544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.972826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.972855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.973032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.973060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.973280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.973343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.973647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.973710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.973982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.974011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.974160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.974221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.974519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.974583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.974848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.974877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.975021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.975085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 
00:38:53.565 [2024-12-09 10:49:37.975372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.975435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.975657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.975684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.565 [2024-12-09 10:49:37.975887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.565 [2024-12-09 10:49:37.975949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.565 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.976191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.976262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.976547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.976575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.976777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.976843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.977130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.977195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.977502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.977530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.977751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.977817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.978034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.978070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 
00:38:53.566 [2024-12-09 10:49:37.978283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.978311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.978470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.978504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.978713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.978753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.978925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.978957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.979163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.979199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.979413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.979447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.979687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.979726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.979916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.979945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.980106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.980140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.980293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.980321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 
00:38:53.566 [2024-12-09 10:49:37.980537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.980571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.980818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.980853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.980985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.981014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.981200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.981235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.981414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.981448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.981609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.981637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.981782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.981826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.982032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.982097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.982360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.982389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.982562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.982624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 
00:38:53.566 [2024-12-09 10:49:37.982914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.982948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.983082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.983115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.983305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.983339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.983528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.983562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.983742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.983780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.983900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.983933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.984107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.984141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.984290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.984319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.984459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.984507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 00:38:53.566 [2024-12-09 10:49:37.984744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.566 [2024-12-09 10:49:37.984778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.566 qpair failed and we were unable to recover it. 
00:38:53.566 [2024-12-09 10:49:37.985001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.985030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.985218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.985252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.985465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.985499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.985693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.985740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.985973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.986011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.986149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.986182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.986390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.986418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.986654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.986688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.986848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.986885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.987053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.987081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 
00:38:53.567 [2024-12-09 10:49:37.987247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.987282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.987421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.987458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.987649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.987677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.987806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.987841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.987991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.988030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.988234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.988263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.988401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.988435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.988546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.988579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.988764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.988794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.989047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.989117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 
00:38:53.567 [2024-12-09 10:49:37.989421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.989484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.989800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.989828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.990014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.990048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.990219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.990253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.990464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.990492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.990611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.990645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.990858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.990893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.991095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.991124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.991361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.991425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.991667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.991759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 
00:38:53.567 [2024-12-09 10:49:37.991960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.991989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.992132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.992171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.992313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.992349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.992488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.992516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.992650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.992698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.992898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.992970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.993219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.993248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.993377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.993442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.993744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.567 [2024-12-09 10:49:37.993778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.567 qpair failed and we were unable to recover it. 00:38:53.567 [2024-12-09 10:49:37.993909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.993938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 
00:38:53.568 [2024-12-09 10:49:37.994149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.994184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.994330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.994363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.994520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.994555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.994751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.994797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.994920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.994949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.995200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.995228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.995462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.995526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.995860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.995926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.996184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.996212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.996407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.996471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 
00:38:53.568 [2024-12-09 10:49:37.996682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.996716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.996928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.996961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.997129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.997162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.997288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.997321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.997467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.997496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.997596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.997625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.997911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.997946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.998146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.998174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.998358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.998435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 00:38:53.568 [2024-12-09 10:49:37.998612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.568 [2024-12-09 10:49:37.998646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.568 qpair failed and we were unable to recover it. 
00:38:53.568 [2024-12-09 10:49:37.998875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.568 [2024-12-09 10:49:37.998905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.568 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every successive reconnect attempt from 10:49:37.998 through 10:49:38.052 ...]
00:38:53.574 [2024-12-09 10:49:38.052020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.574 [2024-12-09 10:49:38.052084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.574 qpair failed and we were unable to recover it.
00:38:53.574 [2024-12-09 10:49:38.052350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.052382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.052498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.052576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.052829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.052864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.053009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.053044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.053200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.053264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.053582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.053647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.053956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.053989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.054154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.054217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.054485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.054549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.054862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.054892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 
00:38:53.574 [2024-12-09 10:49:38.055114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.055177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.055449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.055513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.055775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.055804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.056002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.056066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.056379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.056443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.056737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.056766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.057030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.057094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.057361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.057425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.057639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.057667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.057757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.057807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 
00:38:53.574 [2024-12-09 10:49:38.058050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.058114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.058323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.058351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.058460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.058530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.058771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.058834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.059063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.059091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.059260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.059322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.059531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.059592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.059803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.059832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.059955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.060028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.060236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.060300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 
00:38:53.574 [2024-12-09 10:49:38.060515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.060543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.060741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.060808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.061065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.061129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.061359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.061392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.574 [2024-12-09 10:49:38.061613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.574 [2024-12-09 10:49:38.061678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.574 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.061953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.062017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.062314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.062342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.062514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.062578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.062831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.062896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.063106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.063135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 
00:38:53.575 [2024-12-09 10:49:38.063291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.063357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.063652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.063716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.063950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.063978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.064133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.064197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.064487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.064551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.064840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.064869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.064980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.065043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.065309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.065372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.065637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.065665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.065800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.065866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 
00:38:53.575 [2024-12-09 10:49:38.066161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.066225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.066431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.066459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.066687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.066774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.066930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.066958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.067230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.067259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.067393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.067456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.067757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.067821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.068059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.068087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.068279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.068342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.068647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.068710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 
00:38:53.575 [2024-12-09 10:49:38.068936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.068964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.069220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.069284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.069570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.069633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.069898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.069927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.070080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.070143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.070367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.070429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.575 qpair failed and we were unable to recover it. 00:38:53.575 [2024-12-09 10:49:38.070700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.575 [2024-12-09 10:49:38.070745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.070913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.070976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.071225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.071289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.071541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.071570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 
00:38:53.576 [2024-12-09 10:49:38.071748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.071814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.072081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.072145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.072397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.072433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.072579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.072642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.072887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.072962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.073257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.073285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.073418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.073482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.073757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.073822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.074091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.074119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.074235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.074309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 
00:38:53.576 [2024-12-09 10:49:38.074573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.074637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.074890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.074919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.075100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.075163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.075366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.075429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.075742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.075794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.076000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.076063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.076333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.076396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.076658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.076734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.076931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.076959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.077144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.077207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 
00:38:53.576 [2024-12-09 10:49:38.077518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.077546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.077711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.077804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.077944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.077973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.078210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.078239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.078477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.078540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.078823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.078888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.079194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.079223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.079370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.079434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.079718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.079798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.080109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.080137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 
00:38:53.576 [2024-12-09 10:49:38.080355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.080418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.080750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.080826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.081083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.081112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.081294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.081357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.576 qpair failed and we were unable to recover it. 00:38:53.576 [2024-12-09 10:49:38.081583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.576 [2024-12-09 10:49:38.081647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.081912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.081941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.082082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.082145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.082410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.082474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.082731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.082761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.082882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.082946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 
00:38:53.577 [2024-12-09 10:49:38.083224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.083287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.083592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.083620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.083852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.083920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.084220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.084284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.084592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.084621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.084879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.084945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.085256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.085320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.085545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.085609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.085848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.085877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.086126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.086190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 
00:38:53.577 [2024-12-09 10:49:38.086441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.086469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.086708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.086789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.087034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.087098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.087409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.087438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.087746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.087811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.087990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.088054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.088313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.088341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.088533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.088596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.088836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.088902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.089157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.089185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 
00:38:53.577 [2024-12-09 10:49:38.089371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.089435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.089714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.089793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.090076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.090105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.090287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.090351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.090597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.090660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.090936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.090965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.091126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.091189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.091440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.091503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.091823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.091852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 00:38:53.577 [2024-12-09 10:49:38.092054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.577 [2024-12-09 10:49:38.092117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.577 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt logged between 10:49:38.092418 and 10:49:38.150614 ...]
00:38:53.583 [2024-12-09 10:49:38.150899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.150928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.151087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.151150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.151450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.151513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.151776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.151805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.151986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.152048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.152361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.152424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.152654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.152687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.152856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.152920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.153169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.153232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.153523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.153551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 
00:38:53.583 [2024-12-09 10:49:38.153767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.153831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.154117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.154181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.154446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.154474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.154628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.154691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.154977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.155041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.155291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.155320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.155556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.155630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.155887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.155952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.156202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.156230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.156429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.156492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 
00:38:53.583 [2024-12-09 10:49:38.156676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.156773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.156983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.157011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.157265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.157328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.157622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.157685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.157916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.157944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.158102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.158165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.158411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.158475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.158692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.158728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.158880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.583 [2024-12-09 10:49:38.158944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.583 qpair failed and we were unable to recover it. 00:38:53.583 [2024-12-09 10:49:38.159198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.159261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 
00:38:53.584 [2024-12-09 10:49:38.159567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.159595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.159816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.159881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.160129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.160192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.160427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.160455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.160640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.160703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.161023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.161088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.161335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.161362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.161558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.161621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.161910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.161975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.162292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.162320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 
00:38:53.584 [2024-12-09 10:49:38.162614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.162676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.162996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.163061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.163343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.163370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.163534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.163597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.163917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.163983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.164239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.164267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.164456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.164519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.164825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.164854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.164968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.164996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.165210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.165272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 
00:38:53.584 [2024-12-09 10:49:38.165576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.165639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.165952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.165980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.166129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.166191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.166500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.166563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.166869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.166897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.167150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.167212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.167435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.167498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.167751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.167785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.167943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.168006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.168260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.168324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 
00:38:53.584 [2024-12-09 10:49:38.168618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.168646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.168801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.168865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.169119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.584 [2024-12-09 10:49:38.169182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.584 qpair failed and we were unable to recover it. 00:38:53.584 [2024-12-09 10:49:38.169454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.169482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.169610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.169673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.169945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.169974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.170171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.170199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.170385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.170448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.170703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.170782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.171090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.171119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 
00:38:53.585 [2024-12-09 10:49:38.171347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.171410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.171677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.171754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.172077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.172105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.172354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.172417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.172660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.172753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.173031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.173059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.173178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.173242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.173455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.173517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.173776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.173806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.174036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.174099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 
00:38:53.585 [2024-12-09 10:49:38.174356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.174419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.174700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.174735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.174858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.174922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.175202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.175266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.175508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.175541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.175745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.175810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.176119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.176182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.176451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.176479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.176629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.176693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.177015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.177079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 
00:38:53.585 [2024-12-09 10:49:38.177381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.177409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.177646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.177710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.177947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.177975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.178262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.178290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.178524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.178586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.178827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.178893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.179195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.179223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.179450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.179514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.179805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.179872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.180208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.180272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 
00:38:53.585 [2024-12-09 10:49:38.180554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.180617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.180925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.585 [2024-12-09 10:49:38.180990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.585 qpair failed and we were unable to recover it. 00:38:53.585 [2024-12-09 10:49:38.181291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.181319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.181590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.181653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.181930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.181994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.182306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.182334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.182576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.182639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.182980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.183044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.183303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.183331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.183498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.183561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 
00:38:53.586 [2024-12-09 10:49:38.183785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.183849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.184150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.184178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.184343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.184407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.184716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.184792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.185037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.185065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.185238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.185301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.185597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.185660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.185938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.185967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.186133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.186197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.186463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.186526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 
00:38:53.586 [2024-12-09 10:49:38.186742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.186770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.186953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.187017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.187263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.187326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.187588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.187616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.187765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.187831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.188097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.188171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.188433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.188462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.188581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.188643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.188904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.188970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.189160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.189188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 
00:38:53.586 [2024-12-09 10:49:38.189340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.189403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.189662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.189742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.189995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.190023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.190157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.190192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.190376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.190440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.190700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.190738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.190977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.191041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.191274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.191338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.191574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.191602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 00:38:53.586 [2024-12-09 10:49:38.191788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.586 [2024-12-09 10:49:38.191855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.586 qpair failed and we were unable to recover it. 
00:38:53.865 [2024-12-09 10:49:38.192125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.865 [2024-12-09 10:49:38.192188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.865 qpair failed and we were unable to recover it.
00:38:53.865 [... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplets repeat from 10:49:38.192 through 10:49:38.249; duplicate entries omitted ...]
00:38:53.870 [2024-12-09 10:49:38.249894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.870 [2024-12-09 10:49:38.249932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.870 qpair failed and we were unable to recover it.
00:38:53.870 [2024-12-09 10:49:38.250185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.250248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.250490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.250519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.250663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.250697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.250950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.251015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.251278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.251307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.251486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.251538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.251653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.251686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.251867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.251895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.252070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.252134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.252429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.252484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 
00:38:53.870 [2024-12-09 10:49:38.252615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.252643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.252851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.252918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.253208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.253273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.870 [2024-12-09 10:49:38.253529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.870 [2024-12-09 10:49:38.253558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.870 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.253791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.253864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.254179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.254245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.254502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.254531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.254733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.254768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.255005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.255070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.255346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.255374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 
00:38:53.871 [2024-12-09 10:49:38.255550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.255600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.255815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.255882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.256148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.256176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.256350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.256420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.256642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.256707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.256985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.257015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.257221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.257257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.257574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.257611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.257824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.257852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.258037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.258102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 
00:38:53.871 [2024-12-09 10:49:38.258370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.258435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.258701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.258743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.258937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.259002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.259314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.259378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.259696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.259758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.260020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.260085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.260342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.260376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.260569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.260628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.260943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.261003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.261192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.261265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 
00:38:53.871 [2024-12-09 10:49:38.261488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.261517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.261660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.261752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.262054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.262118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.262417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.262447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.262701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.262784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.263084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.263148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.263460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.263489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.263737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.263827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.264138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.264202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.264474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.264503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 
00:38:53.871 [2024-12-09 10:49:38.264685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.264718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.265074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.265139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.265417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.871 [2024-12-09 10:49:38.265446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.871 qpair failed and we were unable to recover it. 00:38:53.871 [2024-12-09 10:49:38.265652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.265686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.266024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.266089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.266406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.266434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.266639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.266673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.266985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.267050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.267330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.267360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.267509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.267545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 
00:38:53.872 [2024-12-09 10:49:38.267717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.267792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.267971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.267999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.268125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.268194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.268402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.268436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.268582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.268611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.268741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.268792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.268931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.268985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.269268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.269296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.269553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.269618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.269950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.270016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 
00:38:53.872 [2024-12-09 10:49:38.270319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.270349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.270520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.270584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.270899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.270965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.271275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.271304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.271472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.271543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.271796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.271831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.272018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.272047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.272195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.272258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.272509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.272572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.272813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.272843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 
00:38:53.872 [2024-12-09 10:49:38.273091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.273155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.273392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.273426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.273621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.273650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.273904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.273969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.274237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.274310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.274523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.274554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.274690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.274768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.274979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.275013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.275166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.275199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.275382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.275417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 
00:38:53.872 [2024-12-09 10:49:38.275554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.275617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.872 [2024-12-09 10:49:38.275870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.872 [2024-12-09 10:49:38.275899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.872 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.276111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.276145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.276340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.276424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.276694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.276738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.276924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.276986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.277243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.277308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.277581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.277641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.277820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.277850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.278100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.278176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 
00:38:53.873 [2024-12-09 10:49:38.278408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.278437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.278612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.278677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.279019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.279084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.279368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.279397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.279541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.279605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.279803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.279838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.280049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.280080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.280263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.280298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.280458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.280523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.280833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.280864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 
00:38:53.873 [2024-12-09 10:49:38.281021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.281058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.281247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.281281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.281508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.281535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.281660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.281693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.281847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.281882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.282101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.282136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.282257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.282299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.282572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.282638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.282951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.282980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.283167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.283201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 
00:38:53.873 [2024-12-09 10:49:38.283434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.283468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.283654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.283682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.283797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.283831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.284004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.284073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.284382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.284410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.284732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.284768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.284959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.284997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.285173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.285200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.285421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.285454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 00:38:53.873 [2024-12-09 10:49:38.285655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.873 [2024-12-09 10:49:38.285717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.873 qpair failed and we were unable to recover it. 
00:38:53.873 [2024-12-09 10:49:38.285946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.873 [2024-12-09 10:49:38.285975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.873 qpair failed and we were unable to recover it.
...
00:38:53.880 [2024-12-09 10:49:38.345037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.880 [2024-12-09 10:49:38.345099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.880 qpair failed and we were unable to recover it.
00:38:53.880 [2024-12-09 10:49:38.345342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.345370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.345571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.345634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.345950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.346015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.346319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.346347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.346610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.346672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.346958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.347021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.347316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.347344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.347533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.347596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.347898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.347963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.348218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.348246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 
00:38:53.880 [2024-12-09 10:49:38.348486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.348549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.348839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.348868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.348973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.349002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.349160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.349223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.349476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.349539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.349861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.349890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.350142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.350206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.350498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.350560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.350856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.350885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 00:38:53.880 [2024-12-09 10:49:38.351130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.880 [2024-12-09 10:49:38.351193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.880 qpair failed and we were unable to recover it. 
00:38:53.880 [2024-12-09 10:49:38.351479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.351552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.351840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.351869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.352103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.352167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.352399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.352461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.352726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.352755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.352918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.352982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.353274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.353337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.353623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.353651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.353869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.353934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.354246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.354309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 
00:38:53.881 [2024-12-09 10:49:38.354600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.354628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.354754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.354818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.355087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.355150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.355431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.355459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.355629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.355692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.356068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.356131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.356381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.356409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.356546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.356609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.356895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.356924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.357082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.357110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 
00:38:53.881 [2024-12-09 10:49:38.357294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.357357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.357626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.357689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.357952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.357981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.358132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.358195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.358420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.358483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.358690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.358718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.358943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.359006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.359310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.359373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.359628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.359656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.881 [2024-12-09 10:49:38.359818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.359891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 
00:38:53.881 [2024-12-09 10:49:38.360146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.881 [2024-12-09 10:49:38.360209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.881 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.360459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.360487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.360673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.360767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.361082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.361145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.361416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.361444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.361659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.361737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.362009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.362072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.362329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.362358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.362507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.362569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.362862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.362927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 
00:38:53.882 [2024-12-09 10:49:38.363233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.363261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.363473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.363547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.363785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.363850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.364070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.364098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.364283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.364346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.364638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.364701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.364961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.364989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.365185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.365248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.365551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.365614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.365931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.365960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 
00:38:53.882 [2024-12-09 10:49:38.366208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.366272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.366566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.366629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.366939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.366968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.367131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.367194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.367407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.367471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.367699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.367735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.367967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.368030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.368311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.368376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.368678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.368706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.368924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.368987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 
00:38:53.882 [2024-12-09 10:49:38.369285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.369348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.369563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.369591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.369766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.369829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.370124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.370187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.370438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.370466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.370605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.882 [2024-12-09 10:49:38.370668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.882 qpair failed and we were unable to recover it. 00:38:53.882 [2024-12-09 10:49:38.370906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.370970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.371216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.371244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.371387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.371461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.371746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.371811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 
00:38:53.883 [2024-12-09 10:49:38.372117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.372145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.372342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.372404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.372698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.372786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.373001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.373030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.373318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.373381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.373655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.373718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.374052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.374080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.374335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.374398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.374676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.374756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.375059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.375087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 
00:38:53.883 [2024-12-09 10:49:38.375277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.375340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.375583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.375646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.375950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.375978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.376096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.376159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.376449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.376511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.376813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.376842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.377086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.377148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.377460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.377523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.377820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.377849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.378064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.378127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 
00:38:53.883 [2024-12-09 10:49:38.378322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.378384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.378629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.378657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.378859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.378924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.379174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.379237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.379513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.379541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.379666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.379744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.380064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.883 [2024-12-09 10:49:38.380127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.883 qpair failed and we were unable to recover it. 00:38:53.883 [2024-12-09 10:49:38.380424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.380452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.380611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.380675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.381019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.381082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 
00:38:53.884 [2024-12-09 10:49:38.381346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.381409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.381669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.381763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.381928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.381956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.382207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.382270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.382597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.382659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.382969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.382998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.383200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.383264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.383564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.383627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.383899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.383964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 00:38:53.884 [2024-12-09 10:49:38.384265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.884 [2024-12-09 10:49:38.384298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.884 qpair failed and we were unable to recover it. 
00:38:53.884 [2024-12-09 10:49:38.384603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.884 [2024-12-09 10:49:38.384667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.884 qpair failed and we were unable to recover it.
00:38:53.887 (last 3 messages repeated 109 more times between 10:49:38.384981 and 10:49:38.416671, all for tqpair=0xefa5d0 with addr=10.0.0.2, port=4420)
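errno = 111 in the repeated messages above is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting connections on NVMe/TCP port 4420, so every connect() the initiator issues is rejected and each qpair reconnect attempt fails. A minimal standalone C sketch, independent of SPDK and of this test run, that reproduces the same errno; the address and port mirror the log, and any reachable host with no listener on the chosen port behaves the same way:

    /* refused.c - reproduce "connect() failed, errno = 111" (ECONNREFUSED). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With a reachable host and a closed port, this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }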
00:38:53.887 [2024-12-09 10:49:38.416767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf08570 (9): Bad file descriptor
00:38:53.887 [2024-12-09 10:49:38.417091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.887 [2024-12-09 10:49:38.417136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:53.887 qpair failed and we were unable to recover it.
00:38:53.890 (last 3 messages repeated 98 more times between 10:49:38.417251 and 10:49:38.439188, in runs alternating between tqpair=0x7f7f6c000b90 and tqpair=0xefa5d0, both with addr=10.0.0.2, port=4420)
00:38:53.890 [2024-12-09 10:49:38.439395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.439456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.439651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.439712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.439900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.439927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.440051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.440079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.440221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.440282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.440509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.440572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.440798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.440828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.440978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.441041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.441245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.441306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.441531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.441592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 
00:38:53.890 [2024-12-09 10:49:38.441806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.441835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.441964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.442041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.442268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.442296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.442435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.442498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.442741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.442788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.442916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.442945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.443095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.443156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.443383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.443447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.443685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.443772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.443872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.443899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 
00:38:53.890 [2024-12-09 10:49:38.444049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.444111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.444345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.444407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.444598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.444660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.444883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.444912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.445038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.445065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.445211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.890 [2024-12-09 10:49:38.445274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.890 qpair failed and we were unable to recover it. 00:38:53.890 [2024-12-09 10:49:38.445478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.445542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.445703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.445742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.445900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.445971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.446144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.446206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 
00:38:53.891 [2024-12-09 10:49:38.446382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.446410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.446530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.446558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.446755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.446821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.446988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.447016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.447114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.447148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.447280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.447342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.447550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.447578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.447707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.447792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.448024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.448086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.448318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.448346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 
00:38:53.891 [2024-12-09 10:49:38.448517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.448580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.448811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.448875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.449050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.449078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.449261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.449324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.449523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.449587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.449790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.449819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.450000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.450064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.450258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.450321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.450591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.450655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.450898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.450926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 
00:38:53.891 [2024-12-09 10:49:38.451225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.451288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.451521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.451549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.451769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.451797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.451927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.451954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.452198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.452227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.452465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.452528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.452781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.452846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.453099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.453127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.453323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.453386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.453695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.453797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 
00:38:53.891 [2024-12-09 10:49:38.454064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.454092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.454324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.454397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.454675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.454751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.454997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.891 [2024-12-09 10:49:38.455026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.891 qpair failed and we were unable to recover it. 00:38:53.891 [2024-12-09 10:49:38.455216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.455278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.455502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.455564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.455825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.455853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.456032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.456093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.456381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.456452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.456750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.456779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 
00:38:53.892 [2024-12-09 10:49:38.456930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.456992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.457215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.457278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.457530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.457558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.457701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.457788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.458015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.458078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.458317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.458346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.458571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.458633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.458854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.458918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.459217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.459246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.459446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.459509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 
00:38:53.892 [2024-12-09 10:49:38.459781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.459846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.460054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.460083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.460181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.460229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.460438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.460501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.460749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.460779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.460932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.460995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.461221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.461284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.461559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.461589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.461785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.461813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.461944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.462006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 
00:38:53.892 [2024-12-09 10:49:38.462251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.462280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.462396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.462463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.462733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.462798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.463028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.463056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.463232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.463296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.463542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.463605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.463807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.463836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.464008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.464072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.464299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.464362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.464596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.464624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 
00:38:53.892 [2024-12-09 10:49:38.464810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.464876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.465113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.465176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.465476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.892 [2024-12-09 10:49:38.465519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.892 qpair failed and we were unable to recover it. 00:38:53.892 [2024-12-09 10:49:38.465739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.465805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.466014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.466079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.466369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.466397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.466592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.466655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.466913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.466989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.467290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.467318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.467515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.467578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 
00:38:53.893 [2024-12-09 10:49:38.467794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.467859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.468085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.468113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.468285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.468348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.468581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.468646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.468873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.468902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.469103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.469167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.469431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.469494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.469802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.469831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.469933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.469980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.470229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.470292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 
00:38:53.893 [2024-12-09 10:49:38.470616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.470663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.470872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.470901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.471094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.471157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.471457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.471485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.471733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.471798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.471997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.472059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.472296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.472324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.472489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.472552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.472791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.472856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 00:38:53.893 [2024-12-09 10:49:38.473143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:53.893 [2024-12-09 10:49:38.473171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:53.893 qpair failed and we were unable to recover it. 
00:38:53.893 [2024-12-09 10:49:38.473359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:53.893 [2024-12-09 10:49:38.473424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:53.893 qpair failed and we were unable to recover it.
00:38:54.180 [the same three-line error repeats continuously, with only the timestamps varying, from 10:49:38.473738 through 10:49:38.517433: every connect() attempt to 10.0.0.2:4420 fails with errno = 111, and tqpair=0xefa5d0 is never recovered]
00:38:54.180 [2024-12-09 10:49:38.517575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.517607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.517736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.517771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.517885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.517914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.518070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.518117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.518260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.518294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.518447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.518476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.518620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.518653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.518801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.518834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.518965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.518994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.519132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.519165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 
00:38:54.180 [2024-12-09 10:49:38.519376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.519439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.519663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.519691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.519875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.519937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.520135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.520168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.520334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.520362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.520499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.520533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.520699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.520739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.520907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.520937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.521109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.521172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.521408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.521472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 
00:38:54.180 [2024-12-09 10:49:38.521695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.521730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.521865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.521900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.522079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.522112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.522257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.522284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.522378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.522405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.522581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.522616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.522758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.522789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.522918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.523004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.523212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.523276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.523511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.523539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 
00:38:54.180 [2024-12-09 10:49:38.523635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.523683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.523849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.523884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.524034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.524063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.524204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.524238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.524374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.524412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.180 [2024-12-09 10:49:38.524587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.180 [2024-12-09 10:49:38.524615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.180 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.524761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.524831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.525047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.525111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.525322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.525350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.525500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.525569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 
00:38:54.181 [2024-12-09 10:49:38.525740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.525773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.525889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.525920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.526053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.526082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.526197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.526229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.526376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.526408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.526533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.526581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.526755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.526826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.527031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.527059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.527213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.527276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.527477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.527545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 
00:38:54.181 [2024-12-09 10:49:38.527755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.527784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.527926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.527989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.528187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.528248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.528445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.528472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.528645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.528699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.528861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.528898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.529014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.529043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.529198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.529245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.529364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.529398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.529553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.529581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 
00:38:54.181 [2024-12-09 10:49:38.529768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.529842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.530099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.530163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.530427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.530459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.530579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.530612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.530755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.530822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.531048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.531076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.531234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.531295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.531498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.531559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.531732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.531791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.531951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.532000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 
00:38:54.181 [2024-12-09 10:49:38.532150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.532184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.532360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.532391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.532560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.532595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.532794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.532824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.532924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.181 [2024-12-09 10:49:38.532952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.181 qpair failed and we were unable to recover it. 00:38:54.181 [2024-12-09 10:49:38.533091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.533145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.533293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.533328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.533451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.533482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.533659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.533692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.533891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.533921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 
00:38:54.182 [2024-12-09 10:49:38.534049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.534091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.534219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.534290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.534526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.534590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.534814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.534843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.534986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.535019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.535169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.535203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.535386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.535414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.535539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.535572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.535688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.535733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.535895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.535924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 
00:38:54.182 [2024-12-09 10:49:38.536078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.536142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.536362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.536427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.536637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.536666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.536829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.536864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.537007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.537040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.537160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.537189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.537293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.537321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.537495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.537531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.537668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.537699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.537886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.537996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 
00:38:54.182 [2024-12-09 10:49:38.540878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.540931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.541163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.541194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.541429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.541510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.541785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.541866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.542047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.542076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.542251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.542285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.542485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.542530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.542709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.182 [2024-12-09 10:49:38.542752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.182 qpair failed and we were unable to recover it. 00:38:54.182 [2024-12-09 10:49:38.542904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.542932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.543140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.543203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 
00:38:54.183 [2024-12-09 10:49:38.543405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.543433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.543587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.543651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.543852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.543881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.543972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.544000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.544174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.544208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.544347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.544380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.544590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.544619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.544812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.544841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.545022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.545055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.545364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.545392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 
00:38:54.183 [2024-12-09 10:49:38.545630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.545694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.545951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.545985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.546152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.546181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.546344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.546377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.546527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.546561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.546775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.546805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.546988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.547052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.547274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.547338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.547513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.547541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 00:38:54.183 [2024-12-09 10:49:38.547759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.183 [2024-12-09 10:49:38.547812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.183 qpair failed and we were unable to recover it. 
00:38:54.183 [2024-12-09 10:49:38.547967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.183 [2024-12-09 10:49:38.548006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.183 qpair failed and we were unable to recover it.
[... the identical connect() failed / sock connection error / "qpair failed and we were unable to recover it" message group repeats without variation (same tqpair=0xefa5d0, addr=10.0.0.2, port=4420, errno = 111) from 10:49:38.548161 through 10:49:38.603685; only the timestamps advance. Repetitions elided here for readability. ...]
00:38:54.189 [2024-12-09 10:49:38.603620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.189 [2024-12-09 10:49:38.603685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.189 qpair failed and we were unable to recover it.
00:38:54.189 [2024-12-09 10:49:38.603994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.604057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.604275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.604303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.604482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.604545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.604791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.604856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.605097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.605125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.605298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.605360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.605605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.605668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.605929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.605957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.606128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.606191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.606440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.606504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 
00:38:54.189 [2024-12-09 10:49:38.606808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.606837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.607073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.607136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.607349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.607412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.607747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.607795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.608087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.608150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.608444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.608507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.608812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.608841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.609096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.609159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.609412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.609474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.609805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.609854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 
00:38:54.189 [2024-12-09 10:49:38.610111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.610184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.610408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.610472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.610718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.610756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.610913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.610976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.611268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.611331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.611627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.611655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.611871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.611937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.612247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.612311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.612553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.612589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.612743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.612816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 
00:38:54.189 [2024-12-09 10:49:38.613074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.613138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.613436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.613464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.613633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.613696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.613983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.614063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.614318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.614346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.614521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.614584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.614852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.614918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.615170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.615198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.189 [2024-12-09 10:49:38.615382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.189 [2024-12-09 10:49:38.615445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.189 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.615755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.615821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 
00:38:54.190 [2024-12-09 10:49:38.616075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.616103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.616346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.616408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.616652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.616716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.617040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.617068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.617317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.617380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.617702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.617783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.618089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.618117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.618336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.618399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.618718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.618799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.619096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.619124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 
00:38:54.190 [2024-12-09 10:49:38.619323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.619386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.619667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.619763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.620067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.620095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.620309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.620373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.620579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.620642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.620964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.620993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.621216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.621279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.621595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.621658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.621926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.621954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.622100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.622164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 
00:38:54.190 [2024-12-09 10:49:38.622465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.622528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.622825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.622859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.623005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.623069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.623386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.623449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.623746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.623797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.623995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.624059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.624278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.624340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.624607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.624670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.624913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.624942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.625233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.625296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 
00:38:54.190 [2024-12-09 10:49:38.625607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.625635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.625850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.625916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.626218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.626281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.626590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.626618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.626887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.626953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.627214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.627277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.627585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.627613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.627837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.190 [2024-12-09 10:49:38.627902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.190 qpair failed and we were unable to recover it. 00:38:54.190 [2024-12-09 10:49:38.628110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.628174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.628471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.628499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 
00:38:54.191 [2024-12-09 10:49:38.628703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.628790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.628974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.629035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.629270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.629298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.629522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.629585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.629831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.629896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.630185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.630213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.630365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.630428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.630744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.630808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.631061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.631094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.631240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.631304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 
00:38:54.191 [2024-12-09 10:49:38.631557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.631621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.631892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.631921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.632096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.632159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.632470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.632533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.632829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.632858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.633118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.633182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.633455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.633518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.633762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.633791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.634026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.634090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.634336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.634399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 
00:38:54.191 [2024-12-09 10:49:38.634670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.634698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.634866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.634930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.635241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.635274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.635463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.635491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.635685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.635780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.636084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.636147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.636420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.636449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.636596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.636659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.636944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.637006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.637294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.637321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 
00:38:54.191 [2024-12-09 10:49:38.637476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.637537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.637757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.637824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.191 [2024-12-09 10:49:38.638071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.191 [2024-12-09 10:49:38.638101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.191 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.638306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.638370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.638665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.638743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.639062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.639091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.639351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.639416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.639739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.639791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.639980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.640009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.640207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.640271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 
00:38:54.192 [2024-12-09 10:49:38.640519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.640584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.640830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.640869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.641024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.641099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.641365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.641429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.641747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.641778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.641981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.642046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.642307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.642372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.642667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.642696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.642971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.643037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 00:38:54.192 [2024-12-09 10:49:38.643342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.192 [2024-12-09 10:49:38.643417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.192 qpair failed and we were unable to recover it. 
00:38:54.192 [2024-12-09 10:49:38.643683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.192 [2024-12-09 10:49:38.643713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.192 qpair failed and we were unable to recover it.
00:38:54.192 [ ... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnection attempt from 2024-12-09 10:49:38.643 through 10:49:38.706, roughly 200 entries elided ... ]
00:38:54.197 [2024-12-09 10:49:38.706619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.197 [2024-12-09 10:49:38.706682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.197 qpair failed and we were unable to recover it. 00:38:54.197 [2024-12-09 10:49:38.706887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.197 [2024-12-09 10:49:38.706951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.197 qpair failed and we were unable to recover it. 00:38:54.197 [2024-12-09 10:49:38.707153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.197 [2024-12-09 10:49:38.707182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.197 qpair failed and we were unable to recover it. 00:38:54.197 [2024-12-09 10:49:38.707315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.197 [2024-12-09 10:49:38.707383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.197 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.707621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.707685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.707876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.707904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.708083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.708148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.708370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.708433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.708642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.708670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.708817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.708881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 
00:38:54.198 [2024-12-09 10:49:38.709123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.709184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.709364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.709392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.709525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.709593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.709816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.709883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.710089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.710117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.710277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.710340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.710547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.710610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.710848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.710877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.711049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.711113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.711320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.711384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 
00:38:54.198 [2024-12-09 10:49:38.711607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.711670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.711901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.711930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.712118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.712183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.712396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.712425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.712555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.712618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.712811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.712840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.712951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.712979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.713175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.713252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.713537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.713601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.713830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.713895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 
00:38:54.198 [2024-12-09 10:49:38.714137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.714202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.714417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.714481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.714770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.714836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.715056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.715123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.715343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.715409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.715708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.715799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.716057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.716086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.716205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.716277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.716529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.716594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.716835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.716864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 
00:38:54.198 [2024-12-09 10:49:38.717045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.717110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.717353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.717417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.198 qpair failed and we were unable to recover it. 00:38:54.198 [2024-12-09 10:49:38.717695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.198 [2024-12-09 10:49:38.717742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.717924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.717988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.718215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.718279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.718492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.718532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.718713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.718799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.719010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.719075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.719303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.719343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.719542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.719606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 
00:38:54.199 [2024-12-09 10:49:38.719815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.719881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.720149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.720179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.720315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.720378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.720586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.720650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.720875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.720905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.721085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.721149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.721419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.721485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.721748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.721807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.721940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.721969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.722212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.722278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 
00:38:54.199 [2024-12-09 10:49:38.722570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.722634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.722860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.722896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.723050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.723125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.723318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.723383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.723658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.723786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.723932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.723963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.724096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.724162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.724392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.724457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.724710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.724749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.724895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.724960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 
00:38:54.199 [2024-12-09 10:49:38.725209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.725273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.725512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.725552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.725707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.725790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.726010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.726085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.726394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.726423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.726691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.726784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.726976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.727040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.727341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.727381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.727620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.727684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.727911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.727975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 
00:38:54.199 [2024-12-09 10:49:38.728201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.728231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.199 qpair failed and we were unable to recover it. 00:38:54.199 [2024-12-09 10:49:38.728375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.199 [2024-12-09 10:49:38.728419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.728607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.728684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.728885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.728914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.729112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.729179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.729395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.729429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.729599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.729631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.729802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.729867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.730171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.730273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.730551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.730586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 
00:38:54.200 [2024-12-09 10:49:38.730760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.730831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.731080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.731145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.731427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.731457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.731687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.731759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.731949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.732015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.732228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.732258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.732412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.732489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.732679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.732714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.732932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.732962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.733097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.733175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 
00:38:54.200 [2024-12-09 10:49:38.733436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.733515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.733677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.733712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.733939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.733969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.734060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.734090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.734261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.734326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.734558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.734588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.734773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.734812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.734922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.734968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.735206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.735236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.735385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.735414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 
00:38:54.200 [2024-12-09 10:49:38.735564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.735599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.735801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.735832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.735945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.736003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.736261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.736326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.736467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.736497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.736593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.200 [2024-12-09 10:49:38.736630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.200 qpair failed and we were unable to recover it. 00:38:54.200 [2024-12-09 10:49:38.736841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.736907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.737215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.737244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.737438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.737473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.737683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.737786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 
00:38:54.201 [2024-12-09 10:49:38.737908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.737937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.738047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.738104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.738387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.738452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.738692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.738734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.738853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.738887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.739083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.739120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.739352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.739382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.739548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.739587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.739737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.739773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 00:38:54.201 [2024-12-09 10:49:38.739902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.201 [2024-12-09 10:49:38.739932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.201 qpair failed and we were unable to recover it. 
00:38:54.201 [2024-12-09 10:49:38.740042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.201 [2024-12-09 10:49:38.740072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.201 qpair failed and we were unable to recover it.
(last error triplet repeated 27 more times for tqpair=0xefa5d0, through 10:49:38.745588)
00:38:54.201 [2024-12-09 10:49:38.745786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.202 [2024-12-09 10:49:38.745872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420
00:38:54.202 qpair failed and we were unable to recover it.
(last error triplet repeated 52 more times for tqpair=0x7f7f78000b90, through 10:49:38.758945)
00:38:54.203 [2024-12-09 10:49:38.759142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.203 [2024-12-09 10:49:38.759188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:54.203 qpair failed and we were unable to recover it.
(last error triplet repeated 8 more times for tqpair=0x7f7f6c000b90, through 10:49:38.760679)
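For readers triaging this failure: on Linux, errno 111 is ECONNREFUSED, meaning the target at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe-oF port) actively refused the TCP handshake, which usually indicates no listener was up at that moment. The program below is a minimal illustrative sketch, not SPDK's posix_sock_create(); it simply reproduces the same errno by calling connect() against a port with no listener.

    /*
     * Illustrative sketch (not SPDK code): connect() to a port with no
     * listener fails with errno 111 (ECONNREFUSED) on Linux. The address
     * 10.0.0.2:4420 is taken from the log above and is assumed unreachable
     * here; any closed local port shows the same behavior.
     */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on the port this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Compiled with cc and run against a closed port, it prints the same "connect() failed, errno = 111" message seen throughout this log.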
00:38:54.203 [2024-12-09 10:49:38.760849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.203 [2024-12-09 10:49:38.760881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420
00:38:54.203 qpair failed and we were unable to recover it.
(last error triplet repeated 119 more times for tqpair=0x7f7f78000b90, through 10:49:38.790215)
00:38:54.206 [2024-12-09 10:49:38.790410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.206 [2024-12-09 10:49:38.790481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.206 qpair failed and we were unable to recover it. 00:38:54.206 [2024-12-09 10:49:38.790719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.206 [2024-12-09 10:49:38.790763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.206 qpair failed and we were unable to recover it. 00:38:54.206 [2024-12-09 10:49:38.791735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.791789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.791919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.791948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.792109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.792143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.792344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.792408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.792673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.792757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.792897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.792925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.793056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.793111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.793321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.793356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 
00:38:54.207 [2024-12-09 10:49:38.793472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.793507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.793679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.793714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.793890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.793918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.794065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.794093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.794210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.794291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.794552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.794617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.794855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.794890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.795022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.795090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.795352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.795386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.795553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.795583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 
00:38:54.207 [2024-12-09 10:49:38.795791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.795821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.795920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.795948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.796081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.796110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.796227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.796277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.796455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.796490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.796617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.796646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.796817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.796847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.797008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.797075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.797350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.797390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 00:38:54.207 [2024-12-09 10:49:38.797565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.797630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it. 
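[Context, not part of the captured log: on Linux, errno 111 is ECONNREFUSED, i.e. each connect() to 10.0.0.2 port 4420 (the IANA-registered NVMe/TCP port) is answered with a reset because nothing is listening on the target side yet. The minimal C sketch below is illustrative only -- it is not SPDK code; the address and port are copied from the log, everything else is an assumption -- and it reproduces the same errno the posix_sock_create messages report.]

/* Illustrative sketch, not SPDK code: a refused TCP connect() surfaces
 * errno 111 (ECONNREFUSED), the value reported by posix_sock_create above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

[Run against a host with nothing listening on that port, this should print "connect() failed, errno = 111 (Connection refused)", matching the pattern above. The log then continues with the same failure against a second qpair handle:]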
00:38:54.207 [2024-12-09 10:49:38.797842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.207 [2024-12-09 10:49:38.797873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.207 qpair failed and we were unable to recover it.
00:38:54.207-00:38:54.208 [last 3 messages repeated 49 more times for tqpair=0xefa5d0 (timestamps 10:49:38.797-10:49:38.809); only the timestamps differ]
00:38:54.208-00:38:54.209 [same 3-message pattern: 4 more times for tqpair=0xefa5d0 (10:49:38.809), 5 times for tqpair=0x7f7f78000b90 (10:49:38.810), then once for a new handle tqpair=0x7f7f70000b90 (10:49:38.811)]
00:38:54.209 [2024-12-09 10:49:38.811379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.209 [2024-12-09 10:49:38.811449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.209 qpair failed and we were unable to recover it.
00:38:54.209-00:38:54.491 [last 3 messages repeated 59 more times for tqpair=0x7f7f70000b90 (timestamps 10:49:38.811-10:49:38.823); only the timestamps differ, with the wall-clock prefix advancing from 00:38:54.209 to 00:38:54.491 partway through]
00:38:54.491 [2024-12-09 10:49:38.823814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.823855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.824036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.824068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.824204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.824270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.824491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.824555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.824770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.824799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.824909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.824938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.825124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.825197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.825465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.825493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.825643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.825707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 00:38:54.491 [2024-12-09 10:49:38.825896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.491 [2024-12-09 10:49:38.825926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.491 qpair failed and we were unable to recover it. 
00:38:54.492 [2024-12-09 10:49:38.826024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.826052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.826214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.826278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.826554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.826618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.826863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.826892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.826984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.827018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.827151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.827222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.827443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.827480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.827711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.827799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.827932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.827962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.828103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.828131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 
00:38:54.492 [2024-12-09 10:49:38.828230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.828307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.828531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.828595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.828838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.828867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.828994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.829058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.829341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.829404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.829691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.829727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.829861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.829889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.830054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.830122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.830335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.830364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.830530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.830594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 
00:38:54.492 [2024-12-09 10:49:38.830814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.830844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.830970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.830998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.831157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.831221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.831509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.831572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.831853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.831882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.832084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.832148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.832349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.832412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.832696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.832733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.832928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.832991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.833237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.833300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 
00:38:54.492 [2024-12-09 10:49:38.833523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.833551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.833657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.833755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.833987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.834051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.834295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.834323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.834474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.834538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.834793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.834859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.492 [2024-12-09 10:49:38.835191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.492 [2024-12-09 10:49:38.835220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.492 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.835459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.835523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.835800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.835866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.836096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.836125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 
00:38:54.493 [2024-12-09 10:49:38.836283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.836359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.836654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.836718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.836889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.836918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.837108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.837172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.837461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.837525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.837801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.837835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.838011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.838076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.838324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.838387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.838666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.838693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.838894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.838960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 
00:38:54.493 [2024-12-09 10:49:38.839224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.839288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.839547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.839575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.839692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.839777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.839956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.840020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.840265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.840294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.840482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.840545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.840787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.840853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.841105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.841133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.841283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.841346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.841585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.841649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 
00:38:54.493 [2024-12-09 10:49:38.841921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.841950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.842129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.842192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.842472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.842547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.842797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.842827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.842991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.843054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.843249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.843313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.843555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.843619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.843847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.843877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.844007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.844071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.844329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.844358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 
00:38:54.493 [2024-12-09 10:49:38.844519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.493 [2024-12-09 10:49:38.844583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.493 qpair failed and we were unable to recover it. 00:38:54.493 [2024-12-09 10:49:38.844812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.844877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.845185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.845214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.845443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.845507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.845705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.845784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.845994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.846022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.846180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.846244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.846498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.846561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.846783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.846812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.847022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.847086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 
00:38:54.494 [2024-12-09 10:49:38.847320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.847383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.847593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.847621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.847730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.847793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.848008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.848072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.848254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.848283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.848418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.848496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.848812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.848877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.849087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.849115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.849268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.849331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.849565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.849628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 
00:38:54.494 [2024-12-09 10:49:38.849837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.849865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.849976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.850004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.850210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.850273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.850488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.850519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.850698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.850782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.850984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.851047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.851298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.851326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.851532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.851595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.851831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.851896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.852155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.852184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 
00:38:54.494 [2024-12-09 10:49:38.852338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.852402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.852598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.852662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.852892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.852920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.853083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.853147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.853375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.853439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.853682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.853711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.853857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.494 [2024-12-09 10:49:38.853920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.494 qpair failed and we were unable to recover it. 00:38:54.494 [2024-12-09 10:49:38.854174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.854238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.854454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.854482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.854636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.854700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 
00:38:54.495 [2024-12-09 10:49:38.854923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.854995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.855190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.855218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.855363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.855441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.855674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.855775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.856031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.856059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.856192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.856256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.856460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.856524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.856737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.856767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.856873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.856901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.857043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.857112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 
00:38:54.495 [2024-12-09 10:49:38.857405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.857433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.857646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.857709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.857902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.857930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.858028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.858056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.858189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.858271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.858552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.858616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.860368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.860445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.860765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.860833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.861052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.861116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.861340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.861369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 
00:38:54.495 [2024-12-09 10:49:38.861523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.861585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.861810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.861876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.862170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.862199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.862359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.862422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.862639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.862702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.862939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.862969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.863081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.863145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.863361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.863424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.863601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.495 [2024-12-09 10:49:38.863629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.495 qpair failed and we were unable to recover it. 00:38:54.495 [2024-12-09 10:49:38.863764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.496 [2024-12-09 10:49:38.863826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.496 qpair failed and we were unable to recover it. 
00:38:54.496 [2024-12-09 10:49:38.864112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.496 [2024-12-09 10:49:38.864176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.496 qpair failed and we were unable to recover it.
00:38:54.496 [2024-12-09 10:49:38.864408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.496 [2024-12-09 10:49:38.864437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.496 qpair failed and we were unable to recover it.
00:38:54.496 [... the same three-line failure record repeats with advancing timestamps from 2024-12-09 10:49:38.864 through 10:49:38.920, every attempt against addr=10.0.0.2, port=4420 with errno = 111; tqpair=0xefa5d0 throughout, except for six consecutive attempts between 10:49:38.889915 and 10:49:38.891235 that report tqpair=0x7f7f78000b90 before reverting to 0xefa5d0 ...]
00:38:54.501 [2024-12-09 10:49:38.919754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.501 [2024-12-09 10:49:38.919782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.501 qpair failed and we were unable to recover it.
00:38:54.501 [2024-12-09 10:49:38.919996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.501 [2024-12-09 10:49:38.920061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.501 qpair failed and we were unable to recover it.
00:38:54.501 [2024-12-09 10:49:38.920327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.920390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.920681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.920709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.920908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.920972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.921269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.921332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.921636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.921665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.921944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.921974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.922268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.922332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.922618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.922646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.922792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.922822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.922960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.922988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 
00:38:54.501 [2024-12-09 10:49:38.923203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.923231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.923394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.923459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.923754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.923820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.924121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.924150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.924333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.924397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.924633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.924697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.924970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.924999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.925167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.925230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.925536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.925600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.501 [2024-12-09 10:49:38.925899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.925928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 
00:38:54.501 [2024-12-09 10:49:38.926108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.501 [2024-12-09 10:49:38.926170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.501 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.926466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.926530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.926778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.926807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.926953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.927016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.927344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.927407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.927661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.927690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.927836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.927865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.928073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.928136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.928370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.928398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.928560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.928623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 
00:38:54.502 [2024-12-09 10:49:38.928903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.928968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.929226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.929255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.929430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.929493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.929686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.929768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.929937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.929966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.930145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.930219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.930491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.930556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.930804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.930834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.931016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.931079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.931313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.931377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 
00:38:54.502 [2024-12-09 10:49:38.931683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.931712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.931963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.932027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.932286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.932350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.932650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.932678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.932799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.932852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.933048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.933112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.933395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.933423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.933584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.933648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.933944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.934010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.934227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.934256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 
00:38:54.502 [2024-12-09 10:49:38.934406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.934470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.934805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.934871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.935146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.935174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.935342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.935405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.935772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.935839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.936121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.936150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.936256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.936320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.936600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.936664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.936953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.936982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 00:38:54.502 [2024-12-09 10:49:38.937149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.502 [2024-12-09 10:49:38.937212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.502 qpair failed and we were unable to recover it. 
00:38:54.503 [2024-12-09 10:49:38.937449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.937513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.937813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.937842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.938100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.938163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.938468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.938532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.938765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.938794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.938960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.939023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.939266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.939330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.939602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.939630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.939803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.939868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.940101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.940166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 
00:38:54.503 [2024-12-09 10:49:38.940416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.940445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.940600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.940662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.940938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.941003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.941279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.941308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.941434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.941498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.941772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.941838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.942070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.942104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.942265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.942328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.942582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.942645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.942895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.942924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 
00:38:54.503 [2024-12-09 10:49:38.943105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.943168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.943490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.943553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.943866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.943896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.944093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.944157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.944480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.944543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.944803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.944832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.944995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.945058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.945373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.945436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.945704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.945797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.945926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.945956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 
00:38:54.503 [2024-12-09 10:49:38.946195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.946259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.946521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.946549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.946700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.946786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.947060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.947123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.947417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.947445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.947606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.947669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.947941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.948005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.948261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.948289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.948461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.503 [2024-12-09 10:49:38.948524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.503 qpair failed and we were unable to recover it. 00:38:54.503 [2024-12-09 10:49:38.948831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.948896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 
00:38:54.504 [2024-12-09 10:49:38.949136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.949167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.949322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.949385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.949646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.949708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.949942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.949978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.950124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.950187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.950503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.950566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.950858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.950887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.951121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.951183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.951422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.951485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.951736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.951765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 
00:38:54.504 [2024-12-09 10:49:38.951883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.951946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.952245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.952308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.952612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.952640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.952842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.952907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.953196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.953259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.953519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.953548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.953751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.953815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.954130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.954195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.954467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.954495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.954683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.954761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 
00:38:54.504 [2024-12-09 10:49:38.954994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.955053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.955342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.955370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.955634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.955697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.956028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.956092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.956402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.956430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.956657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.956737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.957056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.957119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.957366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.957395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.957607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.957670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 00:38:54.504 [2024-12-09 10:49:38.957896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.504 [2024-12-09 10:49:38.957924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.504 qpair failed and we were unable to recover it. 
00:38:54.504 [2024-12-09 10:49:38.958060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.504 [2024-12-09 10:49:38.958088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.504 qpair failed and we were unable to recover it.
00:38:54.505 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triple repeats continuously from 10:49:38.958 through 10:49:38.969 ...]
00:38:54.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2245552 Killed "${NVMF_APP[@]}" "$@"
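errno = 111 here is Linux ECONNREFUSED: the initiator keeps dialing 10.0.0.2:4420 after the target application was killed (the "Killed" line above), so every connect() is refused until a new listener appears. A minimal standalone C sketch, not SPDK code, that reproduces this failure mode; the address and port simply mirror the log:

```c
/* Minimal sketch (not SPDK code): shows why posix_sock_create logs
 * "connect() failed, errno = 111" while no NVMe-oF target is
 * listening on 10.0.0.2:4420. On Linux, errno 111 is ECONNREFUSED. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target down this prints:
         * connect failed, errno = 111 (Connection refused) */
        printf("connect failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```

Run against an address with no listener and it prints errno 111, matching the posix_sock_create error the host keeps emitting above.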
00:38:54.505 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:54.505 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:54.506 [... connect() failed / "qpair failed" error triples continue interleaved with this trace, 10:49:38.969 through 10:49:38.976 ...]
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2246108
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2246108
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2246108 ']'
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:54.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:54.506 10:49:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:54.507 [... connect() failed / "qpair failed" error triples continue interleaved with this trace, 10:49:38.977 through 10:49:38.980 ...]
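The trace above restarts the target: nvmfappstart relaunches nvmf_tgt (pid 2246108) inside the cvl_0_0_ns_spdk namespace, then waitforlisten blocks until the new process accepts connections on its RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 per the trace). The real helper is shell code in autotest_common.sh; purely as an illustration of the polling idea it implements, a hedged C sketch (the retry interval is an assumption, not from the log):

```c
/* Hedged sketch of the waitforlisten idea (the actual helper is shell
 * code in autotest_common.sh): poll the target's RPC UNIX socket until
 * it accepts a connection, giving up after max_retries attempts. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *rpc_addr, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un sa = { 0 };
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, rpc_addr, sizeof(sa.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* retry interval: an assumption */
    }
    return -1;                  /* timed out waiting for the listener */
}

int main(void)
{
    /* Values mirror the trace: rpc_addr=/var/tmp/spdk.sock, max_retries=100 */
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}
```

Once a poll like this succeeds, the test can reconfigure the fresh target over RPC, at which point the refused-connection loop above should stop.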
00:38:54.507 [... the posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triple for tqpair=0xefa5d0 (addr=10.0.0.2, port=4420) repeats with no other output from 10:49:38.980 through 10:49:39.007 ...]
00:38:54.510 [2024-12-09 10:49:39.008058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.008087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.008217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.008262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.008376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.008409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.008562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.008596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.008740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.008787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.008911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.008940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.009134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.009162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.009311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.009381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.009515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.009549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.009696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.009731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 
00:38:54.510 [2024-12-09 10:49:39.009907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.009972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.010170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.010233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.010436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.010465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.010615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.010649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.010887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.010962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.011207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.011236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.011386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.011449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.011567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.011601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.510 qpair failed and we were unable to recover it. 00:38:54.510 [2024-12-09 10:49:39.011717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.510 [2024-12-09 10:49:39.011754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.011857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.011886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 
00:38:54.511 [2024-12-09 10:49:39.012071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.012135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.012337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.012366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.012533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.012589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.012805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.012870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.013072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.013101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.013226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.013272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.013385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.013420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.013556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.013588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.013714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.013781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.013986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.014050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 
00:38:54.511 [2024-12-09 10:49:39.014263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.014292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.014388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.014433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.014646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.014709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.014870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.014898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.015048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.015119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.015316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.015350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.015513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.015541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.015694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.015772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.015980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.016052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.016277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.016306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 
00:38:54.511 [2024-12-09 10:49:39.016456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.016520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.016716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.016791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.016969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.016997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.017139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.017173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.017311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.017388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.017603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.017635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.017802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.017839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.017981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.018057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.018265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.018294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.018394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.018424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 
00:38:54.511 [2024-12-09 10:49:39.018615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.018649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.018773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.018803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.018953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.019009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.019241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.019308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.019496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.019525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.019655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.019764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.019966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.020030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.511 [2024-12-09 10:49:39.020236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.511 [2024-12-09 10:49:39.020266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.511 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.020417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.020451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.020589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.020656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 
00:38:54.512 [2024-12-09 10:49:39.020916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.020945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.021072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.021135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.021319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.021356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.021524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.021553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.021739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.021805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.022032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.022097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.022328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.022358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.022477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.022513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.022756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.022823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.023040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.023070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 
00:38:54.512 [2024-12-09 10:49:39.023194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.023242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.023379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.023413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.023545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.023574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.023752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.023817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.024020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.024086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.024243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.024273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.024452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.024517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.024737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.024789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.024914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.024944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.025102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.025136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 
00:38:54.512 [2024-12-09 10:49:39.025250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.025283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.025442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.025471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.025607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.025649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.025787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.025822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.025988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.026017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.026176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.026213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.026356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.026392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.026533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.026562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.026688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.026716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.027001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.027035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 
00:38:54.512 [2024-12-09 10:49:39.027205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.027234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.027381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.027414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.027525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.027558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.027702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.027740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.027877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.027911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.028082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.028157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.512 qpair failed and we were unable to recover it. 00:38:54.512 [2024-12-09 10:49:39.028303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.512 [2024-12-09 10:49:39.028332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.513 qpair failed and we were unable to recover it. 00:38:54.513 [2024-12-09 10:49:39.028436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.513 [2024-12-09 10:49:39.028464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.513 qpair failed and we were unable to recover it. 00:38:54.513 [2024-12-09 10:49:39.028633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.513 [2024-12-09 10:49:39.028695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.513 qpair failed and we were unable to recover it. 00:38:54.513 [2024-12-09 10:49:39.028919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.513 [2024-12-09 10:49:39.028947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.513 qpair failed and we were unable to recover it. 
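For context, errno = 111 on Linux is ECONNREFUSED: the TCP connection attempts to 10.0.0.2:4420 are being actively refused, i.e. nothing is listening on the NVMe/TCP port at the target yet. The sketch below shows how a plain POSIX client observes the same failure; the address and port are taken from the log above, while the program itself is purely illustrative and is not SPDK's posix_sock_create().

    /* Minimal sketch (not SPDK code): observe the errno = 111 that
     * posix_sock_create() is reporting above. On Linux, errno 111 is
     * ECONNREFUSED: the host at 10.0.0.2 is reachable, but no listener
     * is bound to port 4420 (the NVMe/TCP port from the log). */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Expect errno == ECONNREFUSED (111) while the target is down. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }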
00:38:54.513 [2024-12-09 10:49:39.029078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.513 [2024-12-09 10:49:39.029126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.513 qpair failed and we were unable to recover it.
00:38:54.513 [2024-12-09 10:49:39.030831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.513 [2024-12-09 10:49:39.030890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420
00:38:54.513 qpair failed and we were unable to recover it.
[... the same failure pattern repeats, 10:49:39.031064 through 10:49:39.037200, now alternating between tqpair=0xefa5d0 and a second tqpair handle, 0x7f7f78000b90; every attempt to 10.0.0.2:4420 failed with errno = 111 ...]
[... further identical failures on tqpair=0xefa5d0, 10:49:39.037437 through 10:49:39.038695 ...]
00:38:54.514 [2024-12-09 10:49:39.038735] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:38:54.514 [2024-12-09 10:49:39.038823] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:54.514 [2024-12-09 10:49:39.038873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.514 [2024-12-09 10:49:39.038971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420
00:38:54.514 qpair failed and we were unable to recover it.
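The bracketed line logged above at 10:49:39.038823 is the DPDK EAL argument vector that the nvmf process hands to the environment abstraction layer during startup. A minimal sketch of that hand-off is below, assuming a stand-alone DPDK program: the argument strings are copied from the log line, but the surrounding code is illustrative and is not SPDK's actual startup path.

    /* Illustrative sketch (not the SPDK nvmf target): passing EAL
     * parameters like the ones logged above into rte_eal_init().
     * Only the argument strings come from the log; the rest is an
     * assumption for illustration. */
    #include <rte_eal.h>
    #include <stdio.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                 /* program name, as in the log */
            "-c", "0xF0",           /* core mask */
            "--no-telemetry",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() returns the number of parsed arguments,
         * or -1 on failure. */
        int ret = rte_eal_init(eal_argc, eal_argv);
        if (ret < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }

        rte_eal_cleanup();
        return 0;
    }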
00:38:54.519 [2024-12-09 10:49:39.078608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.078682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.078888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.078917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.079092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.079155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.079318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.079347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.079433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.079461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.079622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.079685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.079850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.079879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.080008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.080082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.080322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.080384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.080618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.080646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 
00:38:54.519 [2024-12-09 10:49:39.080774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.080840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.081074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.081137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.081307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.081335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.081466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.081518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.081744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.081809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.082040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.082069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.082214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.082278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.082503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.082566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.082743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.082773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.082901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.082959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 
00:38:54.519 [2024-12-09 10:49:39.083194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.083258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.083428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.083456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.083586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.083638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.519 [2024-12-09 10:49:39.083860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.519 [2024-12-09 10:49:39.083925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.519 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.084130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.084159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.084308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.084372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.084597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.084660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.084882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.084911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.085017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.085082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.085261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.085324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 
00:38:54.520 [2024-12-09 10:49:39.085543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.085607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.085785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.085814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.085986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.086050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.086281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.086310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.086462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.086525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.086740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.086806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.087032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.087060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.087163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.087235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.087459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.087523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.087731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.087760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 
00:38:54.520 [2024-12-09 10:49:39.087933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.087997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.088319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.088383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.088638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.088667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.088860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.088925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.089181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.089244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.089501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.089529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.089702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.089781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.090031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.090095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.090403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.090432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.090634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.090698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 
00:38:54.520 [2024-12-09 10:49:39.090921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.090990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.091251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.091283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.091454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.091518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.091799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.091864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.092130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.092158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.092361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.092424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.092701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.092795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.093092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.093121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.093265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.093334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.093579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.093642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 
00:38:54.520 [2024-12-09 10:49:39.093873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.093902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.094090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.094153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.094421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.520 [2024-12-09 10:49:39.094485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.520 qpair failed and we were unable to recover it. 00:38:54.520 [2024-12-09 10:49:39.094768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.094823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.094933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.094962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.095133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.095197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.095502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.095531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.095777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.095842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.096044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.096120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.096386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.096414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 
00:38:54.521 [2024-12-09 10:49:39.096551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.096615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.096852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.096917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.097212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.097240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.097455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.097518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.097796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.097861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.098082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.098110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.098305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.098368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.098643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.098706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.098936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.098965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.099121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.099185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 
00:38:54.521 [2024-12-09 10:49:39.099442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.099505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.099791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.099820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.100026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.100091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.100344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.100407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.100624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.100652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.100804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.100870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.101107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.101170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.101405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.101434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.101634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.101698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.101865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.101893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 
00:38:54.521 [2024-12-09 10:49:39.102082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.102110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.102245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.102308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.102616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.102679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.102897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.102926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.103119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.103182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.103401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.103464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.103657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.103685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.103883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.103948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.521 [2024-12-09 10:49:39.104255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.521 [2024-12-09 10:49:39.104319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.521 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.104560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.104589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 
00:38:54.522 [2024-12-09 10:49:39.104788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.104853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.105114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.105179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.105425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.105453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.105600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.105663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.105898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.105963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.106169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.106197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.106341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.106409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.106675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.106779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.107031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.107060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.107262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.107327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 
00:38:54.522 [2024-12-09 10:49:39.107606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.107670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.107962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.107991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.108150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.108213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.108498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.108562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.108810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.108839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.109040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.109103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.109347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.109411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.109647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.109711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.109960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.110024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.110278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.110342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 
00:38:54.522 [2024-12-09 10:49:39.110570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.110634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.110951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.110980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.111236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.111300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.111604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.111632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.111783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.111859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.112156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.112220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.112470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.112498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.112653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.112717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.112997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.113060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 00:38:54.522 [2024-12-09 10:49:39.113302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.522 [2024-12-09 10:49:39.113330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.522 qpair failed and we were unable to recover it. 
00:38:54.522 [2024-12-09 10:49:39.113507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.522 [2024-12-09 10:49:39.113570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:54.522 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats for tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 from 10:49:39.113835 through 10:49:39.156667 ...]
00:38:54.807 [2024-12-09 10:49:39.157079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.807 [2024-12-09 10:49:39.157183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420
00:38:54.807 qpair failed and we were unable to recover it.
00:38:54.807 [2024-12-09 10:49:39.159049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same failure sequence repeats for tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420, continuing before and after the notice above, through 10:49:39.176039 ...]
00:38:54.808 [2024-12-09 10:49:39.176355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.176388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.176616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.176689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.177145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.177220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.177580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.177613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.178000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.178074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.178395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.178469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.178813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.178847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.179028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.179103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.179473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.179549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.179842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.179876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 
00:38:54.808 [2024-12-09 10:49:39.180077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.180153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.808 [2024-12-09 10:49:39.180433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.808 [2024-12-09 10:49:39.180507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.808 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.180833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.180867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.181084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.181158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.181487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.181562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.181911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.181996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.182324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.182398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.182771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.182848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.183158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.183192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.183389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.183477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 
00:38:54.809 [2024-12-09 10:49:39.183812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.183890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.184208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.184241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.184522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.184597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.184976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.185051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.185360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.185393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.185652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.185763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.186044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.186120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.186445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.186477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.186713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.186804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.187125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.187200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 
00:38:54.809 [2024-12-09 10:49:39.187558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.187590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.187961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.188037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.188372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.188448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.188799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.188850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.189189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.189263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.189614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.189689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.190004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.190037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.190261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.190335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.190697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.190791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.191147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.191179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 
00:38:54.809 [2024-12-09 10:49:39.191553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.191640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.191986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.192061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.192358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.192391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.192691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.192807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.193133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.193206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.193544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.193577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.193939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.194014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.194337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.194411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.194703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.194745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.809 [2024-12-09 10:49:39.195049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.195124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 
00:38:54.809 [2024-12-09 10:49:39.195469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.809 [2024-12-09 10:49:39.195544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.809 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.195834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.195877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.196076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.196160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.196481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.196556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.196897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.196931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.197277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.197350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.197703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.197802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.198152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.198186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.198508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.198583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.198972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.199049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 
00:38:54.810 [2024-12-09 10:49:39.199379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.199412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.199738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.199814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.200104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.200180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.200526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.200559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.200947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.201022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.201376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.201449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.201795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.201829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.202218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.202293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.202597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.202672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.203050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.203100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 
00:38:54.810 [2024-12-09 10:49:39.203463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.203537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.203868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.203945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.204275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.204308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.204650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.204739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.205049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.205123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.205464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.205497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.205835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.205909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.206213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.206288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.206640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.206673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.207044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.207119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 
00:38:54.810 [2024-12-09 10:49:39.207453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.207553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.207873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.207915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.208224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.208299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.208630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.208704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.209034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.209067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.209353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.209427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.209703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.209805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.210102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.210136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.210414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.210488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.810 qpair failed and we were unable to recover it. 00:38:54.810 [2024-12-09 10:49:39.210803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.810 [2024-12-09 10:49:39.210882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 
00:38:54.811 [2024-12-09 10:49:39.211238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.211317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.211645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.211737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.212077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.212150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.212495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.212529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.212909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.212986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.213327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.213403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.213747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.213812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.214180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.214255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.214615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.214693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.215056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.215089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 
00:38:54.811 [2024-12-09 10:49:39.215381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.215454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.215795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.215870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.216107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.216140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.216346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.216420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.216705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.216799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.217128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.217162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.217419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.217497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.217811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.217894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.218236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.218269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.218595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.218669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 
00:38:54.811 [2024-12-09 10:49:39.218950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.219034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.219381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.219414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.219753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.219830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.220193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.220267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.220574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.220608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.220844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.220922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.221228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.221301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.221669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.221768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.221905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.221939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.222254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.222330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 
00:38:54.811 [2024-12-09 10:49:39.222609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.222647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.222847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.222931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.223295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.223369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.223717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.223789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.224150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.224224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.224550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.224625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.224901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.224935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.225107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.225189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.811 qpair failed and we were unable to recover it. 00:38:54.811 [2024-12-09 10:49:39.225479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.811 [2024-12-09 10:49:39.225555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.225823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.225857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 
00:38:54.812 [2024-12-09 10:49:39.226047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.226123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.226497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.226569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.226912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.226946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.227274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.227366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.227611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.227660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.227816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.227853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.228028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.228061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.228240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.228288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.228455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.228489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.228689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.228730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 
00:38:54.812 [2024-12-09 10:49:39.228916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.228949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.229128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.229160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.229316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.229350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.229543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.229576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.229739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.229774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.229918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.229952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.230150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.230245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.230538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.230571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.230776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.230854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.231222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.231296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 
00:38:54.812 [2024-12-09 10:49:39.231543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.231576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.231751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.231791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.232142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.232217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.232526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.232559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.232834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.232875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.233111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.233151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.233441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.233475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.233747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.233786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.234015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.234062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.234339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.234373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 
00:38:54.812 [2024-12-09 10:49:39.234593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.234639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.234805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.234851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.235027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.235060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.235271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.235347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.235709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.235809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.236167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.236200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.236436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.812 [2024-12-09 10:49:39.236510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.812 qpair failed and we were unable to recover it. 00:38:54.812 [2024-12-09 10:49:39.236840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.236917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.237279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.237332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.237554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.237629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 
00:38:54.813 [2024-12-09 10:49:39.238011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.238050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.238357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.238390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.238755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.238811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.238952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.239003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.239203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.239236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.239389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.239428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.239610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.239648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.239881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.239915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.240196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.240271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.240627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.240700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 
00:38:54.813 [2024-12-09 10:49:39.241060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.241093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.241275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.241313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.241508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.241547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.241780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.241814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.242091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.242166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.242494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.242569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.242865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.242898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.243080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.243118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.243288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.243326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.243549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.243582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 
00:38:54.813 [2024-12-09 10:49:39.243775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.243822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.244028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.244102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.244471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.244528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.244882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.244922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.245103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.245142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.245348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.245382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.245537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.245576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.245827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.245867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.246159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.246192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.813 [2024-12-09 10:49:39.246543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.246617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 
00:38:54.813 [2024-12-09 10:49:39.247001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.813 [2024-12-09 10:49:39.247047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.813 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.247291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.247325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.247553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.247592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.247774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.247815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.247994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.248027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.248246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.248319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.248690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.248781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.249087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.249119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.249412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.249451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.249697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.249749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 
00:38:54.814 [2024-12-09 10:49:39.250042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.250076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.250336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.250375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.250595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.250634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.250868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.250901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.251116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.251190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.251504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.251577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.251934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.251968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.252245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.252284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.252500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.252539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.252796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.252833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 
00:38:54.814 [2024-12-09 10:49:39.253048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.253087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.253462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.253538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.253851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.253885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.254108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.254146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.254393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.254432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.254648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.254680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.254955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.254994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.255343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.255419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.255762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.255796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.256075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.256114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 
00:38:54.814 [2024-12-09 10:49:39.256346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.256385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.256560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.256597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.256780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.256819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.257109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.257184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.257493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.257527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.257746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.257779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f70000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.258034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.258082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.258329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.258376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.258510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.258566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 00:38:54.814 [2024-12-09 10:49:39.258705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.814 [2024-12-09 10:49:39.258765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.814 qpair failed and we were unable to recover it. 
00:38:54.815 [2024-12-09 10:49:39.258930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.258967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.259108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.259137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.259245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.259274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.259437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.259466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.259558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:54.815 [2024-12-09 10:49:39.259599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:54.815 [2024-12-09 10:49:39.259620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:54.815 [2024-12-09 10:49:39.259634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:54.815 [2024-12-09 10:49:39.259646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:54.815 [2024-12-09 10:49:39.259597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.259626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.259760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.259789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.259912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.259940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.260105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.260134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it.
00:38:54.815 [2024-12-09 10:49:39.260298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.260326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.260502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.260550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.260735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.260765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.260927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.260956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.261139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.261186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.261389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.261436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.261598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.261627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.261674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:54.815 [2024-12-09 10:49:39.261736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:54.815 [2024-12-09 10:49:39.261790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:54.815 [2024-12-09 10:49:39.261795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:54.815 [2024-12-09 10:49:39.261789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.261819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.261992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.262040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it.
00:38:54.815 [2024-12-09 10:49:39.262165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.262213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.262339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.262416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.262581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.262611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.262749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.262779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.262916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.262945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.263085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.263114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.263289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.263336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.263489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.263525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.263662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.263691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.263870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.263918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 
00:38:54.815 [2024-12-09 10:49:39.264062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.264116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.264289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.264323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.264474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.264504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.264634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.264663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.264807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.264837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.264957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.265009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.265132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.265180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.265354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.265384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.815 qpair failed and we were unable to recover it. 00:38:54.815 [2024-12-09 10:49:39.265547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.815 [2024-12-09 10:49:39.265576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.265713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.265760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 
00:38:54.816 [2024-12-09 10:49:39.266035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.266141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.266479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.266548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.266903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.266938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.267125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.267160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.267368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.267403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.267564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.267598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f78000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.267749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.267780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.267928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.267977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.268170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.268222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.268433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.268487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 
00:38:54.816 [2024-12-09 10:49:39.268597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.268627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.268777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.268812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.268968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.268997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.269152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.269201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.269347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.269396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.269496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.269526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.269689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.269718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.269887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.269915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.270092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.270149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 00:38:54.816 [2024-12-09 10:49:39.270379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.816 [2024-12-09 10:49:39.270427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.816 qpair failed and we were unable to recover it. 
00:38:54.816 [2024-12-09 10:49:39.270561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.816 [2024-12-09 10:49:39.270590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:54.816 qpair failed and we were unable to recover it.
00:38:54.816 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats, one pair per connect attempt against tqpair=0x7f7f6c000b90 (timestamps 10:49:39.270773 through 10:49:39.308281), each attempt ending "qpair failed and we were unable to recover it." ...]
00:38:54.822 [2024-12-09 10:49:39.308487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.822 [2024-12-09 10:49:39.308516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:54.822 qpair failed and we were unable to recover it.
00:38:54.822 [2024-12-09 10:49:39.308650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.308679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.308870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.308924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.309158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.309210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.309386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.309437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.309541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.309570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.309737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.309768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.309917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.309969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.310107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.310136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.310300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.310329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.310489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.310518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 
00:38:54.822 [2024-12-09 10:49:39.310681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.310715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.310863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.310892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.311054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.311084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.311221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.311274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.311435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.311464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.311595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.311624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.311780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.311841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.312025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.312076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.312291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.312343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.312509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.312538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 
00:38:54.822 [2024-12-09 10:49:39.312698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.312741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.312931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.312982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.313155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.313206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.313403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.313453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.313621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.313650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.313751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.313781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.313967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.314018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.314190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.314243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.314396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.314451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.314563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.314592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 
00:38:54.822 [2024-12-09 10:49:39.314800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.314854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.315038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.315089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.822 qpair failed and we were unable to recover it. 00:38:54.822 [2024-12-09 10:49:39.315298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.822 [2024-12-09 10:49:39.315348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.315487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.315516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.315687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.315716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.315889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.315919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.316083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.316136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.316307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.316358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.316503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.316532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.316636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.316665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 
00:38:54.823 [2024-12-09 10:49:39.316830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.316860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.316989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.317018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.317180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.317209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.317375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.317405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.317564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.317594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.317734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.317764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.317925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.317954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.318137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.318190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.318344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.318395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.318531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.318560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 
00:38:54.823 [2024-12-09 10:49:39.318694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.318736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.318872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.318924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.319065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.319094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.319230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.319258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.319384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.319413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.319537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.319566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.319701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.319738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.319861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.319890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.320027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.320056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.320216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.320245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 
00:38:54.823 [2024-12-09 10:49:39.320375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.320404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.320563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.320591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.320680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.320709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.320887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.320916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.321099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.321152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.321392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.321442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.321660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.321688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.321874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.321927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.322166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.322216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.322366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.322416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 
00:38:54.823 [2024-12-09 10:49:39.322546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.322575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.823 qpair failed and we were unable to recover it. 00:38:54.823 [2024-12-09 10:49:39.322713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.823 [2024-12-09 10:49:39.322749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.322931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.322994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.323144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.323197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.323331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.323361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.323520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.323548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.323710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.323747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.323887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.323917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.324079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.324108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.324268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.324321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 
00:38:54.824 [2024-12-09 10:49:39.324481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.324511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.324645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.324675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.324843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.324894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.325049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.325104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.325205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.325234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.325373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.325402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.325563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.325592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.325754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.325784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.325947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.326008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.326230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.326282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 
00:38:54.824 [2024-12-09 10:49:39.326451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.326485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.326617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.326646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.326808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.326870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.327051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.327103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.327274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.327327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.327490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.327519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.327655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.327686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.327817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.327872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.328054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.328106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.328347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.328398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 
00:38:54.824 [2024-12-09 10:49:39.328606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.328635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.328747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.328776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.328957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.329013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.329249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.329303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.329456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.329486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.329620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.329649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.329807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.329860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.330044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.330094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.330296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.330348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 00:38:54.824 [2024-12-09 10:49:39.330483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.824 [2024-12-09 10:49:39.330513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.824 qpair failed and we were unable to recover it. 
00:38:54.824 [2024-12-09 10:49:39.330643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.330672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.330837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.330866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.331019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.331073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.331249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.331300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.331499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.331528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.331631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.331660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.331811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.331862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.332014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.332068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.332219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.332269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.332406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.332434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 
00:38:54.825 [2024-12-09 10:49:39.332595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.332624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.332786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.332847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.333027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.333077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.333200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.333229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.333363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.333392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.333529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.333558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.333728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.333758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.333891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.333920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.334048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.334077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 00:38:54.825 [2024-12-09 10:49:39.334212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.825 [2024-12-09 10:49:39.334241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.825 qpair failed and we were unable to recover it. 
00:38:54.825 [2024-12-09 10:49:39.334403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.825 [2024-12-09 10:49:39.334437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:54.825 qpair failed and we were unable to recover it.
00:38:54.825 [... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeated for each subsequent reconnect attempt on tqpair=0x7f7f6c000b90 through 2024-12-09 10:49:39.373108 ...]
00:38:54.831 [2024-12-09 10:49:39.373108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:54.831 [2024-12-09 10:49:39.373160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420
00:38:54.831 qpair failed and we were unable to recover it.
00:38:54.831 [2024-12-09 10:49:39.373318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.373347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.373511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.373540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.373717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.373752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.373912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.373966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.374125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.374177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.374359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.374411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.374570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.374598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.374736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.374766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.374916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.374968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.375124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.375176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 
00:38:54.831 [2024-12-09 10:49:39.375304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.375333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.375459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.375488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.375646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.375675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.375801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.375830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.375992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.376021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.376184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.376213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.376346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.376375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.376505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.376539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.376696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.376731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.376866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.376895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 
00:38:54.831 [2024-12-09 10:49:39.377055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.377084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.377208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.377237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.377339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.377367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.377501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.377530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.377662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.377691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.377866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.377895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.378054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.378083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.378243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.378272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.378432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.378460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.378617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.378646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 
00:38:54.831 [2024-12-09 10:49:39.378804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.378856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.378997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.379050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.379204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.831 [2024-12-09 10:49:39.379255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.831 qpair failed and we were unable to recover it. 00:38:54.831 [2024-12-09 10:49:39.379415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.379444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.379568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.379596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.379727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.379757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.379892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.379921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.380083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.380112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.380246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.380275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.380437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.380466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 
00:38:54.832 [2024-12-09 10:49:39.380588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.380617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.380748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.380779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.380958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.381012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.381183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.381233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.381377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.381406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.381566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.381595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.381734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.381763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.381890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.381945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.382100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.382152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.382342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.382396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 
00:38:54.832 [2024-12-09 10:49:39.382624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.382653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.382809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.382864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.383041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.383092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.383287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.383338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.383470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.383499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.383664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.383693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.383877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.383929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.384170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.384233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.384455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.384507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.384639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.384668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 
00:38:54.832 [2024-12-09 10:49:39.384818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.384874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.385057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.385107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.385263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.385292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.385413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.385441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.385610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.385639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.385806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.385859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.385996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.386024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.386205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.386256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.386465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.386495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.386628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.386657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 
00:38:54.832 [2024-12-09 10:49:39.386800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.386859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.387006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.832 [2024-12-09 10:49:39.387057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.832 qpair failed and we were unable to recover it. 00:38:54.832 [2024-12-09 10:49:39.387183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.387236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.387363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.387392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.387557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.387586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.387717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.387752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.387913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.387943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.388066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.388095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.388256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.388284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.388446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.388475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 
00:38:54.833 [2024-12-09 10:49:39.388607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.388636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.388786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.388849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.388985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.389014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.389136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.389165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.389340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.389369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.389514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.389543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.389673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.389702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.389844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.389873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.390031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.390060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.390190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.390219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 
00:38:54.833 [2024-12-09 10:49:39.390353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.390383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.390525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.390553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.390708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.390746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.390906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.390936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.391067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.391096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.391255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.391284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.391416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.391445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.391607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.391640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.391816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.391869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.392095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.392154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 
00:38:54.833 [2024-12-09 10:49:39.392393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.392446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.392644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.392673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.392858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.392908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.393132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.393185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.393337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.393388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.393514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.393543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.393704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.393738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.393884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.393937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.394073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.394127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.394288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.394317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 
00:38:54.833 [2024-12-09 10:49:39.394447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.833 [2024-12-09 10:49:39.394476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.833 qpair failed and we were unable to recover it. 00:38:54.833 [2024-12-09 10:49:39.394641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.394670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.394804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.394834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.394972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.395001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.395133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.395162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.395324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.395352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.395450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.395477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.395640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.395667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.395790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.395818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.395986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.396014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 
00:38:54.834 [2024-12-09 10:49:39.396107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.396134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.396262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.396289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.396412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.396440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.396569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.396596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.396774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.396803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.396963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.396992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.397128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.397156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.397319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.397347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.397508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.397536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 00:38:54.834 [2024-12-09 10:49:39.397663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.397691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it. 
00:38:54.834 [2024-12-09 10:49:39.397830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.834 [2024-12-09 10:49:39.397858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7f6c000b90 with addr=10.0.0.2, port=4420 00:38:54.834 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet above repeats with advancing timestamps, 10:49:39.397995 through 10:49:39.404845 (about 39 further attempts), every one against tqpair=0x7f7f6c000b90, addr=10.0.0.2, port=4420 ...]
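errno 111 on Linux is ECONNREFUSED: the target side of 10.0.0.2:4420 has been torn down (which is exactly what nvmf_target_disconnect_tc2 does), so every connect() the host initiator issues is rejected until a listener comes back. A minimal probe of the same condition, as a sketch (the address, port, and retry budget are taken from the log; the script itself is not part of the test suite):

  #!/usr/bin/env bash
  # Probe the listener the way the failing initiator does: connect() and report.
  addr=10.0.0.2 port=4420
  for i in $(seq 1 10); do
      # bash implements /dev/tcp/<host>/<port> with a real connect(2); when no
      # listener is bound, the shell's "Connection refused" (errno 111) error
      # is suppressed here and the attempt is counted as a failure.
      if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
          echo "attempt ${i}: listener is up"
          exit 0
      fi
      echo "attempt ${i}: connect() refused, retrying"
      sleep 1
  done
  exit 1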
[... two further failures against tqpair=0x7f7f6c000b90 (10:49:39.404980, 10:49:39.405221) ...]
00:38:54.835 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:54.835 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:38:54.835 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- timing_exit start_nvmf_tgt
00:38:54.835 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- xtrace_disable
00:38:54.835 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- set +x
[... four failures against tqpair=0x7f7f6c000b90 were interleaved with the trace above (10:49:39.405451–10:49:39.406075); the next two attempts (10:49:39.406319, 10:49:39.406582) fail against a new tqpair=0xefa5d0, same addr=10.0.0.2, port=4420 ...]
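The @864/@868 pair traced above is the tail of a countdown-style readiness wait in autotest_common.sh: the harness decrements a retry counter while polling the target, fails if the counter reaches zero, and returns 0 once the target answers. A hedged reconstruction of that pattern (the function name, counter start, and nc-based probe are illustrative stand-ins, not the actual autotest_common.sh source):

  # Illustrative countdown wait; nc and the counter are stand-ins for the real probe.
  wait_for_target() {
      local i=50
      while (( i > 0 )) && ! nc -z 10.0.0.2 4420 2>/dev/null; do
          (( i-- ))
          sleep 0.1
      done
      (( i == 0 )) && return 1   # budget exhausted: the caller treats this as failure
      return 0                   # target answered before the countdown ran out
  }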
00:38:54.835 [2024-12-09 10:49:39.406882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:54.835 [2024-12-09 10:49:39.406953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420 00:38:54.835 qpair failed and we were unable to recover it.
[... the same triplet repeats with advancing timestamps, 10:49:39.407249 through 10:49:39.413940 (about 29 further attempts), all against tqpair=0xefa5d0, addr=10.0.0.2, port=4420 ...]
[... failures against tqpair=0xefa5d0 continue (10:49:39.414166–10:49:39.415608, eight attempts); three attempts (10:49:39.415823, 10:49:39.416066, 10:49:39.416345) then fail against tqpair=0x7f7f78000b90 before the log returns to tqpair=0xefa5d0 (10:49:39.416652–10:49:39.418374, nine attempts), always addr=10.0.0.2, port=4420, errno = 111 ...]
[... the tqpair=0xefa5d0 connect()/qpair-failure triplet repeats with advancing timestamps, 10:49:39.418638 through 10:49:39.432151 (about 60 attempts), errno = 111 on every attempt, addr=10.0.0.2, port=4420 ...]
[... six failures against tqpair=0xefa5d0 (10:49:39.432329–10:49:39.433270) ...]
00:38:54.838 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... one failure against tqpair=0xefa5d0 (10:49:39.433445) ...]
00:38:54.838 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... one failure against tqpair=0xefa5d0 (10:49:39.433650) ...]
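The trap registered above is what guarantees teardown: even if the disconnect test dies mid-retry, process_shm collects the app's shared-memory state and nvmftestfini unwinds the target. The same idiom in isolation, with illustrative stand-in bodies for the suite's real hooks (which live in nvmf/common.sh):

  # Stand-in bodies; the real process_shm/nvmftestfini come from nvmf/common.sh.
  process_shm()  { echo "would collect shm segment for id $2"; }
  nvmftestfini() { echo "would tear the NVMe-oF target back down"; }
  NVMF_APP_SHM_ID=0
  # '|| :' keeps teardown running even when shm collection fails, and EXIT makes
  # the handler fire on normal completion as well as on SIGINT/SIGTERM.
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT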
00:38:54.838 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:38:54.838 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- set +x
[... nine failures against tqpair=0xefa5d0 were interleaved with the trace above (10:49:39.433918–10:49:39.435927) ...]
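rpc_cmd bdev_malloc_create 64 512 -b Malloc0 (traced at target_disconnect.sh@19) asks the freshly restarted target for a RAM-backed bdev: 64 MiB total, 512-byte blocks, named Malloc0. Outside the harness the same call goes through the repo's rpc.py; the socket path below is SPDK's default and an assumption about this run, not something read from the log:

  # Equivalent standalone invocation against a running SPDK target.
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  # Sanity-check that the bdev exists before the test points a subsystem at it.
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0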
00:38:55.109 Malloc0
00:38:55.109 [2024-12-09 10:49:39.475950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:55.109 [2024-12-09 10:49:39.475984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefa5d0 with addr=10.0.0.2, port=4420
00:38:55.109 qpair failed and we were unable to recover it.
00:38:55.109 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:55.109 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:55.109 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:55.109 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:55.109 [... reconnect-failure pairs continue between the traced commands; duplicates elided ...]
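The traced rpc_cmd shows target-side bringup resuming while the host still cannot connect: a Malloc0 bdev exists and the test is creating the TCP transport. A rough reconstruction with SPDK's stock rpc.py client; the Malloc0 size and block size below are illustrative rather than taken from this log, and the traced call's extra -o option is omitted:

    # Recreate the Malloc0 bdev named in the log, then the TCP transport.
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp           # matches the traced rpc_cmd, minus -o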
00:38:55.109 [2024-12-09 10:49:39.479578] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:55.110 [... reconnect-failure pairs continue around the notice; duplicates elided ...]
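The *** TCP Transport Init *** notice from tcp.c:nvmf_tcp_create confirms the transport came up. The usual next steps expose a bdev through a subsystem and open the 10.0.0.2:4420 listener the host has been dialing, after which the errno 111 loop would stop; a sketch with placeholder NQN and serial number (neither appears in this log):

    # Publish Malloc0 through an NVMe-oF subsystem and listen on the
    # address/port the host keeps retrying. NQN and serial are placeholders.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420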
00:38:55.110 [duplicate records elided: the connect() failed (errno = 111) / sock connection error / qpair-failed triplet repeats 30 more times, 10:49:39.481568 through 10:49:39.491058]
00:38:55.111 [duplicate records elided: 4 connect-failure triplets, 10:49:39.491365 through 10:49:39.492458]
00:38:55.111 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:55.111 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:55.111 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:55.111 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:55.111 [duplicate records elided: 4 more connect-failure triplets interleaved with the trace above, 10:49:39.492749 through 10:49:39.493704]
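The rpc_cmd lines in the xtrace output are the actual target-side setup. In SPDK's test framework rpc_cmd is, to my reading, a thin wrapper around scripts/rpc.py (that is its usual definition in autotest_common.sh; an assumption here, not shown in this log), so outside the harness the same step would look like the sketch below. The NQN and serial number are copied verbatim from the trace:

    # create an NVMe-oF subsystem; -a allows any host to connect,
    # -s sets the serial number reported to hosts
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001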
00:38:55.111 [duplicate records elided: the connect-failure triplet repeats 30 more times, 10:49:39.493979 through 10:49:39.503108]
00:38:55.112 [duplicate records elided: 3 connect-failure triplets, 10:49:39.503306 through 10:49:39.504128]
00:38:55.112 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:55.112 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:55.112 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:55.112 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:55.112 [duplicate records elided: 5 more connect-failure triplets interleaved with the trace above, 10:49:39.504421 through 10:49:39.505754]
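nvmf_subsystem_add_ns attaches an existing bdev to the subsystem as a namespace. Malloc0 is not created in this excerpt; earlier in the test it would have been created with a malloc bdev RPC, presumably something like the sketch below. The 64 MiB size and 512-byte block size are illustrative values, not taken from this log:

    # create the RAM-backed bdev the namespace points at (sizes illustrative)
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    # then expose it as a namespace of the subsystem, as in the trace above
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0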
00:38:55.112 [duplicate records elided: the connect-failure triplet repeats 30 more times, 10:49:39.505932 through 10:49:39.515525]
00:38:55.113 [duplicate records elided: 2 connect-failure triplets, 10:49:39.515710 through 10:49:39.516143]
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:55.113 [duplicate records elided: 6 more connect-failure triplets interleaved with the trace above, 10:49:39.516417 through 10:49:39.518019]
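The listener step maps one-to-one onto the NVMe-oF transport parameters: -t is the transport type (trtype), -a the target address (traddr), and -s the service ID (trsvcid), which for TCP is the port. A standalone equivalent of this step plus the discovery listener that follows a little further down, again assuming rpc_cmd wraps scripts/rpc.py; all values are copied from the trace:

    # make the subsystem reachable over NVMe/TCP on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # the discovery subsystem listens on the same address and port
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery \
        -t tcp -a 10.0.0.2 -s 4420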
00:38:55.113 [duplicate records elided: the connect-failure triplet repeats 18 more times, 10:49:39.518246 through 10:49:39.524059]
00:38:55.113 [2024-12-09 10:49:39.524162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:55.113 [2024-12-09 10:49:39.533116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.113 [2024-12-09 10:49:39.533368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.113 [2024-12-09 10:49:39.533438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.113 [2024-12-09 10:49:39.533478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.113 [2024-12-09 10:49:39.533509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.113 [2024-12-09 10:49:39.533591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.113 qpair failed and we were unable to recover it.
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:55.113 10:49:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2245583
00:38:55.114 [duplicate records elided: an identical Unknown-controller-ID / Fabrics-CONNECT-failure block repeats at 10:49:39.542671, ending "qpair failed and we were unable to recover it."]
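This failure mode differs from the plain ECONNREFUSED loop above: the TCP connect now succeeds (the target is listening), but the NVMe-oF Fabrics CONNECT for an I/O queue pair is rejected because the target no longer recognizes controller ID 0x1, which is exactly the disconnect this test case drives. The wait at host/target_disconnect.sh@50 blocks on a background host process started earlier in the run. A decode of the numeric codes, read against the NVMe and NVMe-oF specs (my reading, not stated in the log itself):

    # rc -5          -> -EIO returned by the fabrics connect poll
    # sct 1          -> Status Code Type 0x1: Command Specific Status
    # sc 130 (0x82)  -> for a Fabrics CONNECT: Connect Invalid Parameters
    #                   (consistent with the "Unknown controller ID" target error)
    # error -6       -> -ENXIO, printed as "No such device or address"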
00:38:55.114 [2024-12-09 10:49:39.552695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.552920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.552987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.553024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.553056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.553128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 00:38:55.114 [2024-12-09 10:49:39.562762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.563016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.563082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.563118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.563149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.563222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 00:38:55.114 [2024-12-09 10:49:39.572513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.572741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.572807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.572844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.572875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.572950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 
00:38:55.114 [2024-12-09 10:49:39.582531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.582752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.582819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.582856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.582887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.582961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 00:38:55.114 [2024-12-09 10:49:39.592552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.592766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.592834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.592871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.592903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.592976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 00:38:55.114 [2024-12-09 10:49:39.602673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.602951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.603017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.603067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.603100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.603173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 
00:38:55.114 [2024-12-09 10:49:39.612752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.612965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.613032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.613069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.613100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.613175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 00:38:55.114 [2024-12-09 10:49:39.622777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.622973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.623036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.623072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.623104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.623177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 00:38:55.114 [2024-12-09 10:49:39.632824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:55.114 [2024-12-09 10:49:39.633057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:55.114 [2024-12-09 10:49:39.633120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:55.114 [2024-12-09 10:49:39.633156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:55.114 [2024-12-09 10:49:39.633185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:55.114 [2024-12-09 10:49:39.633257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:55.114 qpair failed and we were unable to recover it. 
00:38:55.114 [2024-12-09 10:49:39.642830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.114 [2024-12-09 10:49:39.643055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.114 [2024-12-09 10:49:39.643119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.114 [2024-12-09 10:49:39.643156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.114 [2024-12-09 10:49:39.643186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.114 [2024-12-09 10:49:39.643272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.114 qpair failed and we were unable to recover it.
00:38:55.114 [2024-12-09 10:49:39.652824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.114 [2024-12-09 10:49:39.653039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.114 [2024-12-09 10:49:39.653107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.114 [2024-12-09 10:49:39.653144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.114 [2024-12-09 10:49:39.653174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.653247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.662840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.663061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.663126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.663163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.663193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.663264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.672872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.673097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.673162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.673199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.673230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.673303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.682946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.683188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.683252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.683291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.683322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.683395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.692925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.693149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.693211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.693248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.693282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.693354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.702922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.703114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.703178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.703214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.703247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.703321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.712989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.713205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.713266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.713303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.713335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.713407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.723111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.723351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.723416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.723453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.723484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.723556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.733115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.733342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.733403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.733452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.733485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.733554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.743036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.743151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.743186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.743207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.743224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.743262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.115 [2024-12-09 10:49:39.752756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.115 [2024-12-09 10:49:39.752877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.115 [2024-12-09 10:49:39.752909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.115 [2024-12-09 10:49:39.752929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.115 [2024-12-09 10:49:39.752946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.115 [2024-12-09 10:49:39.752985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.115 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.763210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.763440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.376 [2024-12-09 10:49:39.763507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.376 [2024-12-09 10:49:39.763545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.376 [2024-12-09 10:49:39.763576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.376 [2024-12-09 10:49:39.763648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.376 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.773206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.773399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.376 [2024-12-09 10:49:39.773468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.376 [2024-12-09 10:49:39.773505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.376 [2024-12-09 10:49:39.773537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.376 [2024-12-09 10:49:39.773625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.376 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.783228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.783436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.376 [2024-12-09 10:49:39.783500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.376 [2024-12-09 10:49:39.783536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.376 [2024-12-09 10:49:39.783568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.376 [2024-12-09 10:49:39.783640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.376 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.793230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.793450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.376 [2024-12-09 10:49:39.793514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.376 [2024-12-09 10:49:39.793551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.376 [2024-12-09 10:49:39.793583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.376 [2024-12-09 10:49:39.793655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.376 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.803312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.803542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.376 [2024-12-09 10:49:39.803608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.376 [2024-12-09 10:49:39.803646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.376 [2024-12-09 10:49:39.803677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.376 [2024-12-09 10:49:39.803815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.376 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.813379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.813577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.376 [2024-12-09 10:49:39.813640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.376 [2024-12-09 10:49:39.813678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.376 [2024-12-09 10:49:39.813709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.376 [2024-12-09 10:49:39.813803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.376 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.823342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.823584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.376 [2024-12-09 10:49:39.823650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.376 [2024-12-09 10:49:39.823687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.376 [2024-12-09 10:49:39.823718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.376 [2024-12-09 10:49:39.823812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.376 qpair failed and we were unable to recover it.
00:38:55.376 [2024-12-09 10:49:39.833398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.376 [2024-12-09 10:49:39.833611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.833677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.833714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.833772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.833846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.843466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.843682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.843771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.843811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.843843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.843916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.853459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.853668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.853745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.853787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.853819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.853892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.863494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.863699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.863776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.863828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.863861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.863934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.873562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.873778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.873844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.873882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.873913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.873985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.883599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.883818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.883883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.883921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.883952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.884024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.893385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.893605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.893671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.893708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.893774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.893817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.903368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.903547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.903613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.903650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.903681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.903787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.913418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.913630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.913695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.913749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.913804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.913838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.923560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.923805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.923835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.923852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.923867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.923900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.933626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.933831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.933862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.933879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.933893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.933927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.943807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.944060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.944127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.377 [2024-12-09 10:49:39.944164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.377 [2024-12-09 10:49:39.944195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.377 [2024-12-09 10:49:39.944268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.377 qpair failed and we were unable to recover it.
00:38:55.377 [2024-12-09 10:49:39.953805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.377 [2024-12-09 10:49:39.954015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.377 [2024-12-09 10:49:39.954079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:39.954117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:39.954148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:39.954220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.378 [2024-12-09 10:49:39.963894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.378 [2024-12-09 10:49:39.964132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.378 [2024-12-09 10:49:39.964195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:39.964231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:39.964263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:39.964345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.378 [2024-12-09 10:49:39.973881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.378 [2024-12-09 10:49:39.974099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.378 [2024-12-09 10:49:39.974165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:39.974201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:39.974232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:39.974307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.378 [2024-12-09 10:49:39.983882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.378 [2024-12-09 10:49:39.984091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.378 [2024-12-09 10:49:39.984154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:39.984190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:39.984222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:39.984299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.378 [2024-12-09 10:49:39.993948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.378 [2024-12-09 10:49:39.994152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.378 [2024-12-09 10:49:39.994219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:39.994278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:39.994312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:39.994387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.378 [2024-12-09 10:49:40.003916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.378 [2024-12-09 10:49:40.004018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.378 [2024-12-09 10:49:40.004046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:40.004062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:40.004077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:40.004123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.378 [2024-12-09 10:49:40.014182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.378 [2024-12-09 10:49:40.014435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.378 [2024-12-09 10:49:40.014506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:40.014544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:40.014575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:40.014650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.378 [2024-12-09 10:49:40.024112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.378 [2024-12-09 10:49:40.024328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.378 [2024-12-09 10:49:40.024405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.378 [2024-12-09 10:49:40.024443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.378 [2024-12-09 10:49:40.024475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.378 [2024-12-09 10:49:40.024548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.378 qpair failed and we were unable to recover it.
00:38:55.638 [2024-12-09 10:49:40.034075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.638 [2024-12-09 10:49:40.034268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.638 [2024-12-09 10:49:40.034343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.638 [2024-12-09 10:49:40.034380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.638 [2024-12-09 10:49:40.034412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.638 [2024-12-09 10:49:40.034500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.638 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.044173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.044391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.044454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.044491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.044521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.044592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.054209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.054419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.054485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.054524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.054556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.054630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.064054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.064237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.064302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.064341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.064372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.064444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.074268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.074489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.074554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.074591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.074622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.074695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.084115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.084332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.084397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.084434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.084465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.084537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.094301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.094502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.094537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.094557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.094574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.094613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.104298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.104510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.104575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.104612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.104644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.104716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.114372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.114591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.114655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.114691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.114743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.114822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.124432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.124665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.124745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.124801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.124835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.124909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.134413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.134607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.134670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.134706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.134766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.134841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.144444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.144633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.144697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.144751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.144787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.144859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.154472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.154694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.154781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.154821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.154853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.154925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.164574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.639 [2024-12-09 10:49:40.164811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.639 [2024-12-09 10:49:40.164875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.639 [2024-12-09 10:49:40.164913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.639 [2024-12-09 10:49:40.164945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.639 [2024-12-09 10:49:40.165030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.639 qpair failed and we were unable to recover it.
00:38:55.639 [2024-12-09 10:49:40.174566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.174785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.174849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.174885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.174917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.174989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.184580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.184798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.184863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.184900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.184932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.185004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.194611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.194830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.194893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.194930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.194961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.195036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.204713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.204969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.205032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.205068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.205099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.205174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.214769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.214985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.215051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.215088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.215119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.215191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.224740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.224941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.225006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.225043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.225074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.225147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.234784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.235026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.235096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.235134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.235165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.235236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.244873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.245109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.245174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.245211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.245242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.245315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.254869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.255082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.255147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.255198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.255232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.255306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.264849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.265055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.265119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.265155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.265187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.265261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.275334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.275560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.275622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.275658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.275690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.275779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.640 [2024-12-09 10:49:40.285107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.640 [2024-12-09 10:49:40.285343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.640 [2024-12-09 10:49:40.285408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.640 [2024-12-09 10:49:40.285445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.640 [2024-12-09 10:49:40.285476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.640 [2024-12-09 10:49:40.285549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.640 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.295074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.295281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.295345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.295382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.295414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.295500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.305107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.305324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.305386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.305424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.305455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.305526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.315020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.315274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.315336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.315373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.315405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.315476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.325101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.325311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.325373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.325408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.325439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.325511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.335095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.335299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.335362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.335398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.335430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.335503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.345129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.345369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.345434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.345471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.345501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.345574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.355177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.355397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.355460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.355496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.355528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.355600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.365263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.365468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.365532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.365568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.365599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.365669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.375263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.375473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.375535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.375573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.375605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.375677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.385257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.385446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.385509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.385559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.385593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.385665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.395295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.395491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.395555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.395591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.395623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.395695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.405384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.901 [2024-12-09 10:49:40.405598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.901 [2024-12-09 10:49:40.405662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.901 [2024-12-09 10:49:40.405698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.901 [2024-12-09 10:49:40.405752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.901 [2024-12-09 10:49:40.405830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.901 qpair failed and we were unable to recover it.
00:38:55.901 [2024-12-09 10:49:40.415407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.415598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.415662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.415699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.415747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.415823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.425415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.425636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.425701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.425755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.425789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.425875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.435474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.435665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.435746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.435788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.435820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.435892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.445562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.445799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.445864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.445903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.445935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.446008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.455536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.455763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.455827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.455863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.455895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.455968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.465602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.465807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.465870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.465908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.465940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.466013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.475580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.475786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.475852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.475889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.475920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.475992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.485708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.485926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.485991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.486028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.486059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.486132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.495671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.495891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.495958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.495996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.496027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.496101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.505736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.505939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.506001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.506037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.506068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.506141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.515759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.515955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.516020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.516071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.516104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.516177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.525861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.526075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.526140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.526176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.526208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.526280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.535851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.536085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.536148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.902 [2024-12-09 10:49:40.536185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.902 [2024-12-09 10:49:40.536216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.902 [2024-12-09 10:49:40.536290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.902 qpair failed and we were unable to recover it.
00:38:55.902 [2024-12-09 10:49:40.545836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:55.902 [2024-12-09 10:49:40.546039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:55.902 [2024-12-09 10:49:40.546102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:55.903 [2024-12-09 10:49:40.546138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:55.903 [2024-12-09 10:49:40.546169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:55.903 [2024-12-09 10:49:40.546243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:55.903 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.555858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.556044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.556110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.556147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.556179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.556266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.565948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.566166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.566229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.566266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.566298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.566371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.575930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.576124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.576187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.576225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.576258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.576329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.585978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.586175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.586238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.586275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.586307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.586379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.595990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.596183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.596247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.596284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.596315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.596387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.606120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.606394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.606458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.606495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.606525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.606600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.616083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.616301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.616364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.616400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.616432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.616505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.626123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.626323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.626387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.626424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.626457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.626529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.636134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.636329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.636393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.636430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.636460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.163 [2024-12-09 10:49:40.636532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.163 qpair failed and we were unable to recover it.
00:38:56.163 [2024-12-09 10:49:40.646223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.163 [2024-12-09 10:49:40.646420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.163 [2024-12-09 10:49:40.646484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.163 [2024-12-09 10:49:40.646535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.163 [2024-12-09 10:49:40.646568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.646641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.656223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.656433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.656497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.656533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.656564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.656636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.666277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.666489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.666552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.666589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.666620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.666692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.676269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.676460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.676523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.676558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.676589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.676662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.686403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.686609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.686672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.686708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.686759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.686846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.696378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.696582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.696646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.696682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.696713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.696810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.706408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.706593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.706656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.706692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.706738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.706816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.716426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.716605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.716668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.716705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.716755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.716828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.726542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.726750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.726814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.726850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.726882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.726954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.736537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.736763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.736830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.736867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.736898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.736972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.746514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.746691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.746772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.746810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.746841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.746915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.756546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.756750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.756816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.756853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.756886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.756957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.766693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.766941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.767006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.767043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.767076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.164 [2024-12-09 10:49:40.767149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.164 qpair failed and we were unable to recover it.
00:38:56.164 [2024-12-09 10:49:40.776662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.164 [2024-12-09 10:49:40.776886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.164 [2024-12-09 10:49:40.776952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.164 [2024-12-09 10:49:40.777002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.164 [2024-12-09 10:49:40.777035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.165 [2024-12-09 10:49:40.777109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.165 qpair failed and we were unable to recover it.
00:38:56.165 [2024-12-09 10:49:40.786645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.165 [2024-12-09 10:49:40.786840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.165 [2024-12-09 10:49:40.786904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.165 [2024-12-09 10:49:40.786942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.165 [2024-12-09 10:49:40.786974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.165 [2024-12-09 10:49:40.787046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.165 qpair failed and we were unable to recover it. 00:38:56.165 [2024-12-09 10:49:40.796682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.165 [2024-12-09 10:49:40.796889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.165 [2024-12-09 10:49:40.796954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.165 [2024-12-09 10:49:40.796991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.165 [2024-12-09 10:49:40.797023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.165 [2024-12-09 10:49:40.797096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.165 qpair failed and we were unable to recover it. 00:38:56.165 [2024-12-09 10:49:40.806780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.165 [2024-12-09 10:49:40.807031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.165 [2024-12-09 10:49:40.807093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.165 [2024-12-09 10:49:40.807129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.165 [2024-12-09 10:49:40.807160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.165 [2024-12-09 10:49:40.807234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.165 qpair failed and we were unable to recover it. 
00:38:56.425 [2024-12-09 10:49:40.816804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.425 [2024-12-09 10:49:40.817055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.425 [2024-12-09 10:49:40.817117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.425 [2024-12-09 10:49:40.817153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.425 [2024-12-09 10:49:40.817185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.425 [2024-12-09 10:49:40.817272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.425 qpair failed and we were unable to recover it. 00:38:56.425 [2024-12-09 10:49:40.826890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.425 [2024-12-09 10:49:40.827165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.425 [2024-12-09 10:49:40.827228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.425 [2024-12-09 10:49:40.827265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.425 [2024-12-09 10:49:40.827298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.425 [2024-12-09 10:49:40.827371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.425 qpair failed and we were unable to recover it. 00:38:56.425 [2024-12-09 10:49:40.836828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.425 [2024-12-09 10:49:40.837008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.425 [2024-12-09 10:49:40.837072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.425 [2024-12-09 10:49:40.837108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.425 [2024-12-09 10:49:40.837140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.837212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 
00:38:56.426 [2024-12-09 10:49:40.846925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.847197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.847260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.847298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.847329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.847401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.856941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.857147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.857211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.857248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.857279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.857350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.866965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.867196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.867261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.867297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.867329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.867401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 
00:38:56.426 [2024-12-09 10:49:40.877098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.877301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.877365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.877402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.877432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.877505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.887059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.887274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.887337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.887374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.887404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.887477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.897070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.897274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.897340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.897379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.897411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.897484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 
00:38:56.426 [2024-12-09 10:49:40.907083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.907283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.907347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.907398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.907431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.907503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.917124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.917317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.917381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.917417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.917449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.917522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.927218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.927426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.927487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.927524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.927555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.927628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 
00:38:56.426 [2024-12-09 10:49:40.937190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.937423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.937486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.937523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.937555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.937628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.947226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.947442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.947505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.947541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.947572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.947667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 00:38:56.426 [2024-12-09 10:49:40.957253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.426 [2024-12-09 10:49:40.957452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.426 [2024-12-09 10:49:40.957516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.426 [2024-12-09 10:49:40.957552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.426 [2024-12-09 10:49:40.957582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.426 [2024-12-09 10:49:40.957654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.426 qpair failed and we were unable to recover it. 
00:38:56.427 [2024-12-09 10:49:40.967386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:40.967657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:40.967737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:40.967779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:40.967811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:40.967885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:40.977322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:40.977546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:40.977611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:40.977647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:40.977678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:40.977768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:40.987375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:40.987574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:40.987638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:40.987675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:40.987707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:40.987804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 
00:38:56.427 [2024-12-09 10:49:40.997395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:40.997610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:40.997675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:40.997711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:40.997761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:40.997834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:41.007515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.007766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.007831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.007867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.007898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.007971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:41.017511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.017705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.017794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.017832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.017864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.017936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 
00:38:56.427 [2024-12-09 10:49:41.027503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.027742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.027806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.027844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.027875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.027948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:41.037537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.037750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.037816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.037867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.037900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.037974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:41.047598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.047841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.047902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.047938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.047969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.048041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 
00:38:56.427 [2024-12-09 10:49:41.057609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.057863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.057927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.057964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.057996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.058070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:41.067607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.067814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.067878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.067914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.067946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.068019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 00:38:56.427 [2024-12-09 10:49:41.077644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.427 [2024-12-09 10:49:41.077894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.427 [2024-12-09 10:49:41.077958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.427 [2024-12-09 10:49:41.077994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.427 [2024-12-09 10:49:41.078028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.427 [2024-12-09 10:49:41.078114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.427 qpair failed and we were unable to recover it. 
00:38:56.688 [2024-12-09 10:49:41.087743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.087955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.088018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.088055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.088085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.088159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 00:38:56.688 [2024-12-09 10:49:41.097744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.097941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.098005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.098042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.098074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.098146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 00:38:56.688 [2024-12-09 10:49:41.107766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.107978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.108041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.108077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.108108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.108181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 
00:38:56.688 [2024-12-09 10:49:41.117807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.118043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.118107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.118144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.118175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.118245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 00:38:56.688 [2024-12-09 10:49:41.127886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.128111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.128174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.128211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.128243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.128315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 00:38:56.688 [2024-12-09 10:49:41.137859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.138056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.138120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.138156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.138188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.138260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 
00:38:56.688 [2024-12-09 10:49:41.148005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.148211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.148274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.148310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.148341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.148414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 00:38:56.688 [2024-12-09 10:49:41.157958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.158144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.158208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.158244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.158275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.158347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 00:38:56.688 [2024-12-09 10:49:41.168098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.168304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.168368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.168418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.688 [2024-12-09 10:49:41.168451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.688 [2024-12-09 10:49:41.168522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.688 qpair failed and we were unable to recover it. 
00:38:56.688 [2024-12-09 10:49:41.178035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.688 [2024-12-09 10:49:41.178240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.688 [2024-12-09 10:49:41.178303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.688 [2024-12-09 10:49:41.178339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.178369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.178442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.188055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.188246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.188310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.188347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.188379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.188451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.198097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.198280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.198344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.198379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.198411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.198484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 
00:38:56.689 [2024-12-09 10:49:41.208199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.208428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.208490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.208527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.208559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.208646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.218205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.218440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.218506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.218543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.218574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.218645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.228209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.228424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.228488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.228524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.228556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.228628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 
00:38:56.689 [2024-12-09 10:49:41.238234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.238464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.238529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.238565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.238597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.238670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.248356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.248570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.248636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.248673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.248704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.248797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.258353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.258555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.258619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.258656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.258688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.258776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 
00:38:56.689 [2024-12-09 10:49:41.268352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.268533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.268598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.268635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.268666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.268753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.278349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.278548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.278613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.278649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.278680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.278767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 00:38:56.689 [2024-12-09 10:49:41.288449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:56.689 [2024-12-09 10:49:41.288667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:56.689 [2024-12-09 10:49:41.288750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:56.689 [2024-12-09 10:49:41.288792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:56.689 [2024-12-09 10:49:41.288825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:56.689 [2024-12-09 10:49:41.288898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:56.689 qpair failed and we were unable to recover it. 
00:38:56.689 [2024-12-09 10:49:41.298430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.689 [2024-12-09 10:49:41.298703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.690 [2024-12-09 10:49:41.298781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.690 [2024-12-09 10:49:41.298832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.690 [2024-12-09 10:49:41.298866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.690 [2024-12-09 10:49:41.298939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.690 qpair failed and we were unable to recover it.
00:38:56.690 [2024-12-09 10:49:41.308465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.690 [2024-12-09 10:49:41.308664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.690 [2024-12-09 10:49:41.308751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.690 [2024-12-09 10:49:41.308794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.690 [2024-12-09 10:49:41.308827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.690 [2024-12-09 10:49:41.308899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.690 qpair failed and we were unable to recover it.
00:38:56.690 [2024-12-09 10:49:41.318495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.690 [2024-12-09 10:49:41.318716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.690 [2024-12-09 10:49:41.318797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.690 [2024-12-09 10:49:41.318834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.690 [2024-12-09 10:49:41.318865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.690 [2024-12-09 10:49:41.318937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.690 qpair failed and we were unable to recover it.
00:38:56.690 [2024-12-09 10:49:41.328580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.690 [2024-12-09 10:49:41.328765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.690 [2024-12-09 10:49:41.328794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.690 [2024-12-09 10:49:41.328811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.690 [2024-12-09 10:49:41.328825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.690 [2024-12-09 10:49:41.328859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.690 qpair failed and we were unable to recover it.
00:38:56.690 [2024-12-09 10:49:41.338616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.690 [2024-12-09 10:49:41.338839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.690 [2024-12-09 10:49:41.338902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.690 [2024-12-09 10:49:41.338940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.690 [2024-12-09 10:49:41.338971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.690 [2024-12-09 10:49:41.339044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.690 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.348597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.348827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.348897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.348934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.348967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.349042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.358610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.358827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.358894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.358932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.358964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.359038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.368786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.369022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.369087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.369124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.369156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.369228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.378668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.378900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.378966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.379002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.379034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.379106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.388787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.389022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.389089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.389126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.389158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.389230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.398751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.398961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.399025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.399063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.399094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.399167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.408837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.409048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.409111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.409147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.409180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.409252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.418842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.419025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.419088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.419125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.419157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.419228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.952 qpair failed and we were unable to recover it.
00:38:56.952 [2024-12-09 10:49:41.428861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.952 [2024-12-09 10:49:41.429057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.952 [2024-12-09 10:49:41.429121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.952 [2024-12-09 10:49:41.429172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.952 [2024-12-09 10:49:41.429206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.952 [2024-12-09 10:49:41.429280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.438909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.439127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.439189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.439225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.439256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.439328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.448947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.449149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.449214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.449250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.449281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.449352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.458946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.459136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.459200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.459236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.459267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.459338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.469016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.469207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.469269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.469306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.469338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.469408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.479008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.479209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.479272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.479308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.479340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.479412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.489119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.489347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.489407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.489444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.489474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.489548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.499105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.499316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.499379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.499416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.499452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.499535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.509131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.509349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.509415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.509452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.509483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.509555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.519226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.519498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.519563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.519601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.519632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.519705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.529268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.529528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.529593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.529630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.529661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.529764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.539278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.539494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.539559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.539595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.539627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.539702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.549248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.549442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.549506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.549542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.549575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.549647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.953 [2024-12-09 10:49:41.559279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.953 [2024-12-09 10:49:41.559487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.953 [2024-12-09 10:49:41.559550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.953 [2024-12-09 10:49:41.559600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.953 [2024-12-09 10:49:41.559634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.953 [2024-12-09 10:49:41.559708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.953 qpair failed and we were unable to recover it.
00:38:56.954 [2024-12-09 10:49:41.569352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.954 [2024-12-09 10:49:41.569552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.954 [2024-12-09 10:49:41.569615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.954 [2024-12-09 10:49:41.569651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.954 [2024-12-09 10:49:41.569682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.954 [2024-12-09 10:49:41.569778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.954 qpair failed and we were unable to recover it.
00:38:56.954 [2024-12-09 10:49:41.579397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.954 [2024-12-09 10:49:41.579618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.954 [2024-12-09 10:49:41.579682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.954 [2024-12-09 10:49:41.579719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.954 [2024-12-09 10:49:41.579770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.954 [2024-12-09 10:49:41.579844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.954 qpair failed and we were unable to recover it.
00:38:56.954 [2024-12-09 10:49:41.589424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.954 [2024-12-09 10:49:41.589673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.954 [2024-12-09 10:49:41.589752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.954 [2024-12-09 10:49:41.589795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.954 [2024-12-09 10:49:41.589826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.954 [2024-12-09 10:49:41.589898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.954 qpair failed and we were unable to recover it.
00:38:56.954 [2024-12-09 10:49:41.599429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:56.954 [2024-12-09 10:49:41.599633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:56.954 [2024-12-09 10:49:41.599693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:56.954 [2024-12-09 10:49:41.599751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:56.954 [2024-12-09 10:49:41.599788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:56.954 [2024-12-09 10:49:41.599862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:56.954 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.609523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.609754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.609817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.609855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.609887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.609959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.619509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.619711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.619793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.619831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.619864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.619936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.629531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.629786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.629849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.629887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.629919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.629992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.639568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.639787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.639850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.639887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.639918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.639991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.649650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.649914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.649978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.650016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.650047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.650120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.659610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.659825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.659892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.659928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.659960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.660032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.669670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.669883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.669947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.669984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.670016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.670087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.679698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.679916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.679976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.680014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.680045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.680117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.689830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.690120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.690182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.690231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.690266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.690351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.699828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.700041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.700100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.700136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.700166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.215 [2024-12-09 10:49:41.700240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.215 qpair failed and we were unable to recover it.
00:38:57.215 [2024-12-09 10:49:41.709846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.215 [2024-12-09 10:49:41.710047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.215 [2024-12-09 10:49:41.710111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.215 [2024-12-09 10:49:41.710147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.215 [2024-12-09 10:49:41.710179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.710252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.719847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.720063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.720128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.720165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.720195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.720269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.729947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.730180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.730245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.730282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.730313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.730384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.739916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.740128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.740192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.740228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.740260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.740333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.749957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.750177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.750243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.750281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.750321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.750404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.760036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.760236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.760300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.760338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.760370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.760442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.770122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.770344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.770407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.770444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.770475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.770548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.780188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.780439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.780503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.780540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.780571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.780643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.790085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.790279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.790343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.790380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.790411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.790483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.800162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.800398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.800462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.800499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.800530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.800602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.810210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.810455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.810519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.810556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.810588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.810661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.820193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.820402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.820465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.820516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.820551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.820625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.830236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.830443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.830507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.830544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.830575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.830647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.840267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.216 [2024-12-09 10:49:41.840487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.216 [2024-12-09 10:49:41.840553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.216 [2024-12-09 10:49:41.840589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.216 [2024-12-09 10:49:41.840621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.216 [2024-12-09 10:49:41.840694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.216 qpair failed and we were unable to recover it.
00:38:57.216 [2024-12-09 10:49:41.850349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.217 [2024-12-09 10:49:41.850561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.217 [2024-12-09 10:49:41.850625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.217 [2024-12-09 10:49:41.850662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.217 [2024-12-09 10:49:41.850693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.217 [2024-12-09 10:49:41.850818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.217 qpair failed and we were unable to recover it.
00:38:57.217 [2024-12-09 10:49:41.860358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.217 [2024-12-09 10:49:41.860592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.217 [2024-12-09 10:49:41.860658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.217 [2024-12-09 10:49:41.860695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.217 [2024-12-09 10:49:41.860740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.217 [2024-12-09 10:49:41.860817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.217 qpair failed and we were unable to recover it.
00:38:57.478 [2024-12-09 10:49:41.870333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.478 [2024-12-09 10:49:41.870538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.478 [2024-12-09 10:49:41.870602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.478 [2024-12-09 10:49:41.870639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.478 [2024-12-09 10:49:41.870670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.478 [2024-12-09 10:49:41.870760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.478 qpair failed and we were unable to recover it.
00:38:57.478 [2024-12-09 10:49:41.880421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.478 [2024-12-09 10:49:41.880631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.478 [2024-12-09 10:49:41.880694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.478 [2024-12-09 10:49:41.880747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.478 [2024-12-09 10:49:41.880782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.478 [2024-12-09 10:49:41.880855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.478 qpair failed and we were unable to recover it.
00:38:57.478 [2024-12-09 10:49:41.890501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.478 [2024-12-09 10:49:41.890752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.478 [2024-12-09 10:49:41.890819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.478 [2024-12-09 10:49:41.890856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.478 [2024-12-09 10:49:41.890887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.478 [2024-12-09 10:49:41.890970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.478 qpair failed and we were unable to recover it.
00:38:57.478 [2024-12-09 10:49:41.900470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.478 [2024-12-09 10:49:41.900684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.478 [2024-12-09 10:49:41.900771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.478 [2024-12-09 10:49:41.900810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.478 [2024-12-09 10:49:41.900841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.478 [2024-12-09 10:49:41.900913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.478 qpair failed and we were unable to recover it.
00:38:57.478 [2024-12-09 10:49:41.910523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.478 [2024-12-09 10:49:41.910792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.910856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.910893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.910924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.910998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.920570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.479 [2024-12-09 10:49:41.920841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.920907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.920946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.920985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.921057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.930620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.479 [2024-12-09 10:49:41.930860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.930925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.930962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.930994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.931068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.940569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.479 [2024-12-09 10:49:41.940778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.940844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.940881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.940912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.940985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.950610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.479 [2024-12-09 10:49:41.950829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.950895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.950954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.950991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.951067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.960682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.479 [2024-12-09 10:49:41.960904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.960977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.961014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.961045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.961118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.970793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.479 [2024-12-09 10:49:41.971051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.971114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.971150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.971181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.971255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.980746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:57.479 [2024-12-09 10:49:41.980949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:57.479 [2024-12-09 10:49:41.981022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:57.479 [2024-12-09 10:49:41.981059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:57.479 [2024-12-09 10:49:41.981091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:57.479 [2024-12-09 10:49:41.981164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:57.479 qpair failed and we were unable to recover it.
00:38:57.479 [2024-12-09 10:49:41.990819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.479 [2024-12-09 10:49:41.991025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.479 [2024-12-09 10:49:41.991088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.479 [2024-12-09 10:49:41.991126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.479 [2024-12-09 10:49:41.991159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.479 [2024-12-09 10:49:41.991236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.479 qpair failed and we were unable to recover it. 00:38:57.479 [2024-12-09 10:49:42.000794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.479 [2024-12-09 10:49:42.001000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.479 [2024-12-09 10:49:42.001067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.479 [2024-12-09 10:49:42.001104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.479 [2024-12-09 10:49:42.001134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.479 [2024-12-09 10:49:42.001207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.479 qpair failed and we were unable to recover it. 00:38:57.479 [2024-12-09 10:49:42.010890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.479 [2024-12-09 10:49:42.011116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.479 [2024-12-09 10:49:42.011180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.479 [2024-12-09 10:49:42.011218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.479 [2024-12-09 10:49:42.011249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.479 [2024-12-09 10:49:42.011323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.479 qpair failed and we were unable to recover it. 
00:38:57.479 [2024-12-09 10:49:42.020959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.479 [2024-12-09 10:49:42.021165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.479 [2024-12-09 10:49:42.021229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.479 [2024-12-09 10:49:42.021266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.479 [2024-12-09 10:49:42.021298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.479 [2024-12-09 10:49:42.021370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.479 qpair failed and we were unable to recover it. 00:38:57.479 [2024-12-09 10:49:42.030905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.479 [2024-12-09 10:49:42.031109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.479 [2024-12-09 10:49:42.031173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.479 [2024-12-09 10:49:42.031208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.479 [2024-12-09 10:49:42.031240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.479 [2024-12-09 10:49:42.031312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.479 qpair failed and we were unable to recover it. 00:38:57.479 [2024-12-09 10:49:42.040924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.479 [2024-12-09 10:49:42.041124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.479 [2024-12-09 10:49:42.041190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.479 [2024-12-09 10:49:42.041227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.479 [2024-12-09 10:49:42.041257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.041330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 
00:38:57.480 [2024-12-09 10:49:42.051089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.051322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.051381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.051416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.051447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.051518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 00:38:57.480 [2024-12-09 10:49:42.061078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.061294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.061355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.061391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.061423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.061494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 00:38:57.480 [2024-12-09 10:49:42.071051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.071255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.071314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.071350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.071381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.071453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 
00:38:57.480 [2024-12-09 10:49:42.081084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.081306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.081369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.081419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.081454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.081527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 00:38:57.480 [2024-12-09 10:49:42.091179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.091400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.091464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.091502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.091533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.091603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 00:38:57.480 [2024-12-09 10:49:42.101209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.101422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.101484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.101521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.101553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.101625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 
00:38:57.480 [2024-12-09 10:49:42.111268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.111483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.111546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.111582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.111613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.111687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 00:38:57.480 [2024-12-09 10:49:42.121230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.121430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.121491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.121527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.121558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.121629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.480 qpair failed and we were unable to recover it. 00:38:57.480 [2024-12-09 10:49:42.131300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.480 [2024-12-09 10:49:42.131539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.480 [2024-12-09 10:49:42.131603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.480 [2024-12-09 10:49:42.131640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.480 [2024-12-09 10:49:42.131672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.480 [2024-12-09 10:49:42.131764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.741 qpair failed and we were unable to recover it. 
00:38:57.741 [2024-12-09 10:49:42.141317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.741 [2024-12-09 10:49:42.141545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.741 [2024-12-09 10:49:42.141609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.741 [2024-12-09 10:49:42.141647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.741 [2024-12-09 10:49:42.141678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.741 [2024-12-09 10:49:42.141765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.151341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.151547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.151607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.151644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.151675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.151765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.161380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.161590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.161654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.161690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.161736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.161816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 
00:38:57.742 [2024-12-09 10:49:42.171501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.171764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.171827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.171864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.171897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.171977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.181468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.181683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.181759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.181799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.181831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.181906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.191502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.191700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.191784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.191822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.191853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.191926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 
00:38:57.742 [2024-12-09 10:49:42.201505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.201716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.201797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.201834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.201865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.201938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.211465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.211684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.211765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.211818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.211852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.211935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.221635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.221853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.221918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.221956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.221986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.222058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 
00:38:57.742 [2024-12-09 10:49:42.231616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.231841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.231905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.231942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.231973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.232051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.241668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.241885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.241949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.241986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.242020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.242092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.251698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.251943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.252007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.252043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.252074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.252155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 
00:38:57.742 [2024-12-09 10:49:42.261763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.261980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.262043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.262080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.262112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.262185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.742 [2024-12-09 10:49:42.271763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.742 [2024-12-09 10:49:42.271970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.742 [2024-12-09 10:49:42.272036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.742 [2024-12-09 10:49:42.272073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.742 [2024-12-09 10:49:42.272114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.742 [2024-12-09 10:49:42.272198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.742 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.281773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.281963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.282029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.282065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.282097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.282169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 
00:38:57.743 [2024-12-09 10:49:42.291887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.292103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.292168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.292206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.292238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.292311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.301839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.302037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.302112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.302151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.302184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.302256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.311935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.312141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.312206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.312242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.312275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.312347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 
00:38:57.743 [2024-12-09 10:49:42.322232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.322447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.322511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.322548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.322580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.322655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.332197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.332454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.332519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.332555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.332587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.332659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.342142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.342350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.342415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.342466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.342500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.342572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 
00:38:57.743 [2024-12-09 10:49:42.352231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.352443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.352506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.352543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.352574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.352647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.362071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.362256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.362320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.362357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.362389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.362460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.372203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.372441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.372504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.372540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.372572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.372645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 
00:38:57.743 [2024-12-09 10:49:42.382167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.382392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.382456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.382493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.382524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.382596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:57.743 [2024-12-09 10:49:42.392196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:57.743 [2024-12-09 10:49:42.392428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:57.743 [2024-12-09 10:49:42.392491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:57.743 [2024-12-09 10:49:42.392527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:57.743 [2024-12-09 10:49:42.392559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:57.743 [2024-12-09 10:49:42.392632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:57.743 qpair failed and we were unable to recover it. 00:38:58.003 [2024-12-09 10:49:42.402238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.003 [2024-12-09 10:49:42.402445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.003 [2024-12-09 10:49:42.402509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.003 [2024-12-09 10:49:42.402546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.003 [2024-12-09 10:49:42.402576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.003 [2024-12-09 10:49:42.402647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.003 qpair failed and we were unable to recover it. 
00:38:58.003 [2024-12-09 10:49:42.412347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.003 [2024-12-09 10:49:42.412576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.003 [2024-12-09 10:49:42.412640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.003 [2024-12-09 10:49:42.412676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.003 [2024-12-09 10:49:42.412707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.003 [2024-12-09 10:49:42.412799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.003 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.422254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.422458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.422522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.422559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.422591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.422665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.432297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.432498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.432573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.432613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.432645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.432717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 
00:38:58.004 [2024-12-09 10:49:42.442362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.442569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.442632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.442669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.442702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.442799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.452430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.452660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.452741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.452782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.452814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.452893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.462418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.462623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.462686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.462737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.462773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.462846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 
00:38:58.004 [2024-12-09 10:49:42.472528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.472744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.472806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.472856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.472889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.472963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.482476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.482690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.482772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.482811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.482842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.482915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.492572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.492857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.492922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.492959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.492991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.493065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 
00:38:58.004 [2024-12-09 10:49:42.502568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.502781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.502847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.502885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.502916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.502990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.512603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.512822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.512885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.512922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.512955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.513039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.522586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.522801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.522865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.522903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.522934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.523008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 
00:38:58.004 [2024-12-09 10:49:42.532675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.532972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.533037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.533074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.533105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.533179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.542681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.542888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.004 [2024-12-09 10:49:42.542954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.004 [2024-12-09 10:49:42.542991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.004 [2024-12-09 10:49:42.543022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.004 [2024-12-09 10:49:42.543096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.004 qpair failed and we were unable to recover it. 00:38:58.004 [2024-12-09 10:49:42.552717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.004 [2024-12-09 10:49:42.552948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.553013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.553050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.553082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.553154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 
00:38:58.005 [2024-12-09 10:49:42.562709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.562926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.563003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.563043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.563075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.563148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 00:38:58.005 [2024-12-09 10:49:42.572834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.573064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.573128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.573164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.573197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.573270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 00:38:58.005 [2024-12-09 10:49:42.582797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.583011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.583075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.583112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.583144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.583217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 
00:38:58.005 [2024-12-09 10:49:42.592863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.593075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.593135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.593171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.593203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.593277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 00:38:58.005 [2024-12-09 10:49:42.602899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.603113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.603178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.603229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.603263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.603336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 00:38:58.005 [2024-12-09 10:49:42.613008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.613230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.613295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.613333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.613363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.613436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 
00:38:58.005 [2024-12-09 10:49:42.622946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.623142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.623207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.623244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.623276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.623347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 00:38:58.005 [2024-12-09 10:49:42.632986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.633171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.633236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.633274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.633305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.633377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 00:38:58.005 [2024-12-09 10:49:42.643012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.643227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.643292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.643328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.643359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.643432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 
00:38:58.005 [2024-12-09 10:49:42.653116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.005 [2024-12-09 10:49:42.653343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.005 [2024-12-09 10:49:42.653406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.005 [2024-12-09 10:49:42.653442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.005 [2024-12-09 10:49:42.653474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.005 [2024-12-09 10:49:42.653546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.005 qpair failed and we were unable to recover it. 00:38:58.265 [2024-12-09 10:49:42.663129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.265 [2024-12-09 10:49:42.663378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.265 [2024-12-09 10:49:42.663443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.265 [2024-12-09 10:49:42.663481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.265 [2024-12-09 10:49:42.663514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.265 [2024-12-09 10:49:42.663587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.265 qpair failed and we were unable to recover it. 00:38:58.265 [2024-12-09 10:49:42.673137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.265 [2024-12-09 10:49:42.673378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.265 [2024-12-09 10:49:42.673442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.265 [2024-12-09 10:49:42.673479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.265 [2024-12-09 10:49:42.673510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.265 [2024-12-09 10:49:42.673582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.265 qpair failed and we were unable to recover it. 
00:38:58.265 [2024-12-09 10:49:42.683139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.265 [2024-12-09 10:49:42.683336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.265 [2024-12-09 10:49:42.683399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.265 [2024-12-09 10:49:42.683435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.265 [2024-12-09 10:49:42.683468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.265 [2024-12-09 10:49:42.683540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.693222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.693462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.693546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.693585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.693617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.693689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.703245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.703448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.703512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.703549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.703580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.703652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 
00:38:58.266 [2024-12-09 10:49:42.713255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.713439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.713502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.713539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.713571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.713642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.723283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.723479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.723542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.723578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.723609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.723682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.733378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.733585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.733647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.733697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.733753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.733832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 
00:38:58.266 [2024-12-09 10:49:42.743377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.743597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.743660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.743697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.743744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.743822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.753388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.753585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.753652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.753689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.753738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.753819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.763440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.763627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.763690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.763749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.763787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.763862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 
00:38:58.266 [2024-12-09 10:49:42.773491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.773688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.773765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.773805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.773837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.773909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.783502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.783701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.783780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.783818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.783850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.783922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.793493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.793677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.793761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.793802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.793833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.793907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 
00:38:58.266 [2024-12-09 10:49:42.803542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.803785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.803849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.803885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.803917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.803988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.813590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.266 [2024-12-09 10:49:42.813830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.266 [2024-12-09 10:49:42.813896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.266 [2024-12-09 10:49:42.813934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.266 [2024-12-09 10:49:42.813965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.266 [2024-12-09 10:49:42.814039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.266 qpair failed and we were unable to recover it. 00:38:58.266 [2024-12-09 10:49:42.823585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.823802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.823881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.823920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.823952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.824026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 
00:38:58.267 [2024-12-09 10:49:42.833638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.833852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.833916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.833952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.833985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.834058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 00:38:58.267 [2024-12-09 10:49:42.843688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.843947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.844013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.844050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.844082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.844155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 00:38:58.267 [2024-12-09 10:49:42.853762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.853974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.854036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.854073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.854105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.854177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 
00:38:58.267 [2024-12-09 10:49:42.863757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.863966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.864029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.864079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.864112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.864185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 00:38:58.267 [2024-12-09 10:49:42.873768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.873975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.874039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.874075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.874106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.874180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 00:38:58.267 [2024-12-09 10:49:42.883831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.884018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.884082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.884118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.884149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.884221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 
00:38:58.267 [2024-12-09 10:49:42.893902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.894141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.894205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.894241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.894271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.894343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 00:38:58.267 [2024-12-09 10:49:42.903930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.904140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.904204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.904242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.904273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.904347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 00:38:58.267 [2024-12-09 10:49:42.913964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.267 [2024-12-09 10:49:42.914173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.267 [2024-12-09 10:49:42.914236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.267 [2024-12-09 10:49:42.914274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.267 [2024-12-09 10:49:42.914305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.267 [2024-12-09 10:49:42.914378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.267 qpair failed and we were unable to recover it. 
00:38:58.528 [2024-12-09 10:49:42.924006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.924189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.924251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.924288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.924321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.924392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 00:38:58.528 [2024-12-09 10:49:42.934093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.934319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.934383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.934420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.934452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.934525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 00:38:58.528 [2024-12-09 10:49:42.944115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.944364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.944430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.944467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.944500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.944574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 
00:38:58.528 [2024-12-09 10:49:42.954189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.954422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.954499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.954537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.954568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.954641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 00:38:58.528 [2024-12-09 10:49:42.964195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.964393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.964457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.964493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.964524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.964597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 00:38:58.528 [2024-12-09 10:49:42.974277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.974497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.974560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.974598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.974629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.974700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 
00:38:58.528 [2024-12-09 10:49:42.984240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.984469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.984532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.984570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.984602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.984675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 00:38:58.528 [2024-12-09 10:49:42.994273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:42.994459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:42.994522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:42.994573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:42.994607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:42.994681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 00:38:58.528 [2024-12-09 10:49:43.004239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:43.004445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:43.004512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:43.004548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:43.004580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:43.004653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 
00:38:58.528 [2024-12-09 10:49:43.014389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:43.014630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:43.014693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.528 [2024-12-09 10:49:43.014745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.528 [2024-12-09 10:49:43.014781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.528 [2024-12-09 10:49:43.014855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.528 qpair failed and we were unable to recover it. 00:38:58.528 [2024-12-09 10:49:43.024395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.528 [2024-12-09 10:49:43.024596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.528 [2024-12-09 10:49:43.024660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.024697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.024751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.024830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.034404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.034603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.034668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.034705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.034757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.034835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 
00:38:58.529 [2024-12-09 10:49:43.044398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.044604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.044667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.044704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.044754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.044829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.054474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.054672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.054746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.054786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.054817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.054890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.064479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.064675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.064753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.064793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.064826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.064899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 
00:38:58.529 [2024-12-09 10:49:43.074504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.074687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.074773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.074811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.074842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.074915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.084515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.084738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.084816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.084855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.084887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.084960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.094776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.095011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.095074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.095110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.095142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.095215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 
00:38:58.529 [2024-12-09 10:49:43.104582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.104786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.104852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.104888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.104920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.104992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.114701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.114944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.115008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.115044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.115074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.115146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.124695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.124916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.124984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.125033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.125067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.125139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 
00:38:58.529 [2024-12-09 10:49:43.134831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.135052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.135115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.135151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.135183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.135254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.144711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.144937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.145000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.145036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.145068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.529 [2024-12-09 10:49:43.145141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.529 qpair failed and we were unable to recover it. 00:38:58.529 [2024-12-09 10:49:43.154808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.529 [2024-12-09 10:49:43.155053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.529 [2024-12-09 10:49:43.155117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.529 [2024-12-09 10:49:43.155154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.529 [2024-12-09 10:49:43.155186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.530 [2024-12-09 10:49:43.155260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.530 qpair failed and we were unable to recover it. 
00:38:58.530 [2024-12-09 10:49:43.164825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.530 [2024-12-09 10:49:43.165036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.530 [2024-12-09 10:49:43.165100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.530 [2024-12-09 10:49:43.165137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.530 [2024-12-09 10:49:43.165168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.530 [2024-12-09 10:49:43.165242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.530 qpair failed and we were unable to recover it. 00:38:58.530 [2024-12-09 10:49:43.174920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.530 [2024-12-09 10:49:43.175135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.530 [2024-12-09 10:49:43.175199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.530 [2024-12-09 10:49:43.175237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.530 [2024-12-09 10:49:43.175269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.530 [2024-12-09 10:49:43.175342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.530 qpair failed and we were unable to recover it. 00:38:58.790 [2024-12-09 10:49:43.184895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.790 [2024-12-09 10:49:43.185113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.790 [2024-12-09 10:49:43.185178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.790 [2024-12-09 10:49:43.185215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.790 [2024-12-09 10:49:43.185246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.790 [2024-12-09 10:49:43.185318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.790 qpair failed and we were unable to recover it. 
00:38:58.790 [2024-12-09 10:49:43.194929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.790 [2024-12-09 10:49:43.195127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.790 [2024-12-09 10:49:43.195190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.790 [2024-12-09 10:49:43.195227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.790 [2024-12-09 10:49:43.195258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.790 [2024-12-09 10:49:43.195330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.790 qpair failed and we were unable to recover it. 00:38:58.790 [2024-12-09 10:49:43.204984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.790 [2024-12-09 10:49:43.205172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.790 [2024-12-09 10:49:43.205235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.790 [2024-12-09 10:49:43.205271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.790 [2024-12-09 10:49:43.205302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.790 [2024-12-09 10:49:43.205374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.790 qpair failed and we were unable to recover it. 00:38:58.790 [2024-12-09 10:49:43.215045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.790 [2024-12-09 10:49:43.215256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.790 [2024-12-09 10:49:43.215332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.790 [2024-12-09 10:49:43.215370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.790 [2024-12-09 10:49:43.215403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.215474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 
00:38:58.791 [2024-12-09 10:49:43.225024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.225230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.225294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.225331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.225362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.225434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.235042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.235242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.235305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.235342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.235373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.235446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.245086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.245278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.245340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.245377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.245408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.245481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 
00:38:58.791 [2024-12-09 10:49:43.255186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.255387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.255452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.255502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.255537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.255609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.265196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.265422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.265486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.265523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.265555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.265627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.275237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.275424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.275488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.275524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.275555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.275627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 
00:38:58.791 [2024-12-09 10:49:43.285236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.285455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.285519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.285555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.285587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.285660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.295302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.295559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.295622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.295658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.295689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.295778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.305293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.305521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.305585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.305622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.305653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.305745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 
00:38:58.791 [2024-12-09 10:49:43.315306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.315517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.315580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.315616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.315647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.315736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.325375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.325561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.325623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.325660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.325691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.325779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 00:38:58.791 [2024-12-09 10:49:43.335436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.335681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.335761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.335799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.335831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.791 [2024-12-09 10:49:43.335905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.791 qpair failed and we were unable to recover it. 
00:38:58.791 [2024-12-09 10:49:43.345430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.791 [2024-12-09 10:49:43.345615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.791 [2024-12-09 10:49:43.345698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.791 [2024-12-09 10:49:43.345754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.791 [2024-12-09 10:49:43.345790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.345864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 00:38:58.792 [2024-12-09 10:49:43.355485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.355717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.355795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.355832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.355863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.355936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 00:38:58.792 [2024-12-09 10:49:43.365482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.365690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.365766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.365804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.365836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.365908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 
00:38:58.792 [2024-12-09 10:49:43.375580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.375792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.375856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.375893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.375925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.375997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 00:38:58.792 [2024-12-09 10:49:43.385587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.385788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.385852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.385901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.385935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.386009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 00:38:58.792 [2024-12-09 10:49:43.395613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.395815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.395879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.395916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.395947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.396019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 
00:38:58.792 [2024-12-09 10:49:43.405614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.405824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.405889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.405926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.405957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.406029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 00:38:58.792 [2024-12-09 10:49:43.415690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.415924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.415987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.416023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.416056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.416128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 00:38:58.792 [2024-12-09 10:49:43.425690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.425902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.425965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.426000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.426031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.426105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 
00:38:58.792 [2024-12-09 10:49:43.435751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:58.792 [2024-12-09 10:49:43.435946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:58.792 [2024-12-09 10:49:43.436010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:58.792 [2024-12-09 10:49:43.436046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:58.792 [2024-12-09 10:49:43.436077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:58.792 [2024-12-09 10:49:43.436149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:58.792 qpair failed and we were unable to recover it. 00:38:59.052 [2024-12-09 10:49:43.445737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.052 [2024-12-09 10:49:43.445925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.052 [2024-12-09 10:49:43.445989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.052 [2024-12-09 10:49:43.446025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.052 [2024-12-09 10:49:43.446056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.052 [2024-12-09 10:49:43.446129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.052 qpair failed and we were unable to recover it. 00:38:59.052 [2024-12-09 10:49:43.455873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.052 [2024-12-09 10:49:43.456089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.052 [2024-12-09 10:49:43.456163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.052 [2024-12-09 10:49:43.456200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.052 [2024-12-09 10:49:43.456232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.052 [2024-12-09 10:49:43.456303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.052 qpair failed and we were unable to recover it. 
00:38:59.052 [2024-12-09 10:49:43.465844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.052 [2024-12-09 10:49:43.466072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.052 [2024-12-09 10:49:43.466134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.052 [2024-12-09 10:49:43.466171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.052 [2024-12-09 10:49:43.466203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.466276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.475842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.476045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.476121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.476159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.476191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.476265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.485906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.486117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.486180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.486217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.486248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.486320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 
00:38:59.053 [2024-12-09 10:49:43.495987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.496250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.496314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.496350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.496381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.496454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.505971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.506165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.506230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.506267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.506299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.506372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.516054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.516302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.516369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.516418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.516452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.516524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 
00:38:59.053 [2024-12-09 10:49:43.526044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.526229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.526293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.526330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.526361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.526433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.536124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.536330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.536394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.536431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.536462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.536534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.546088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.546277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.546340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.546376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.546407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.546479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 
00:38:59.053 [2024-12-09 10:49:43.556168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.556375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.556438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.556474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.556506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.556580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.566123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.566306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.566368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.566405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.566436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.566508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.576232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.576454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.576517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.576554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.576585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.576658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 
00:38:59.053 [2024-12-09 10:49:43.586245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.586452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.586516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.586553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.586584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.586656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.596252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.053 [2024-12-09 10:49:43.596460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.053 [2024-12-09 10:49:43.596523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.053 [2024-12-09 10:49:43.596560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.053 [2024-12-09 10:49:43.596592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.053 [2024-12-09 10:49:43.596665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.053 qpair failed and we were unable to recover it. 00:38:59.053 [2024-12-09 10:49:43.606318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.054 [2024-12-09 10:49:43.606509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.054 [2024-12-09 10:49:43.606585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.054 [2024-12-09 10:49:43.606624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.054 [2024-12-09 10:49:43.606656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.054 [2024-12-09 10:49:43.606742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.054 qpair failed and we were unable to recover it. 
00:38:59.054 [2024-12-09 10:49:43.616386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.054 [2024-12-09 10:49:43.616590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.054 [2024-12-09 10:49:43.616653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.054 [2024-12-09 10:49:43.616690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.054 [2024-12-09 10:49:43.616734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.054 [2024-12-09 10:49:43.616812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.054 qpair failed and we were unable to recover it. 00:38:59.054 [2024-12-09 10:49:43.626366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.054 [2024-12-09 10:49:43.626569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.054 [2024-12-09 10:49:43.626634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.054 [2024-12-09 10:49:43.626670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.054 [2024-12-09 10:49:43.626702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.054 [2024-12-09 10:49:43.626789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.054 qpair failed and we were unable to recover it. 00:38:59.054 [2024-12-09 10:49:43.636424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:59.054 [2024-12-09 10:49:43.636624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:59.054 [2024-12-09 10:49:43.636687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:59.054 [2024-12-09 10:49:43.636739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:59.054 [2024-12-09 10:49:43.636776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:38:59.054 [2024-12-09 10:49:43.636848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:59.054 qpair failed and we were unable to recover it. 
00:38:59.054 [2024-12-09 10:49:43.646470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.054 [2024-12-09 10:49:43.646673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.054 [2024-12-09 10:49:43.646756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.054 [2024-12-09 10:49:43.646812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.054 [2024-12-09 10:49:43.646846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.054 [2024-12-09 10:49:43.646920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.054 qpair failed and we were unable to recover it.
00:38:59.054 [2024-12-09 10:49:43.656533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.054 [2024-12-09 10:49:43.656776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.054 [2024-12-09 10:49:43.656841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.054 [2024-12-09 10:49:43.656878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.054 [2024-12-09 10:49:43.656909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.054 [2024-12-09 10:49:43.656982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.054 qpair failed and we were unable to recover it.
00:38:59.054 [2024-12-09 10:49:43.666497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.054 [2024-12-09 10:49:43.666691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.054 [2024-12-09 10:49:43.666770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.054 [2024-12-09 10:49:43.666809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.054 [2024-12-09 10:49:43.666840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.054 [2024-12-09 10:49:43.666913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.054 qpair failed and we were unable to recover it.
00:38:59.054 [2024-12-09 10:49:43.676546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.054 [2024-12-09 10:49:43.676749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.054 [2024-12-09 10:49:43.676815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.054 [2024-12-09 10:49:43.676852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.054 [2024-12-09 10:49:43.676883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.054 [2024-12-09 10:49:43.676955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.054 qpair failed and we were unable to recover it.
00:38:59.054 [2024-12-09 10:49:43.686581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.054 [2024-12-09 10:49:43.686809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.054 [2024-12-09 10:49:43.686874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.054 [2024-12-09 10:49:43.686910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.054 [2024-12-09 10:49:43.686942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.054 [2024-12-09 10:49:43.687014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.054 qpair failed and we were unable to recover it.
00:38:59.054 [2024-12-09 10:49:43.696683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.054 [2024-12-09 10:49:43.696906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.054 [2024-12-09 10:49:43.696969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.054 [2024-12-09 10:49:43.697005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.054 [2024-12-09 10:49:43.697037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.054 [2024-12-09 10:49:43.697111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.054 qpair failed and we were unable to recover it.
00:38:59.314 [2024-12-09 10:49:43.706689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.314 [2024-12-09 10:49:43.706916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.314 [2024-12-09 10:49:43.706979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.314 [2024-12-09 10:49:43.707016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.314 [2024-12-09 10:49:43.707046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.314 [2024-12-09 10:49:43.707119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.314 qpair failed and we were unable to recover it.
00:38:59.314 [2024-12-09 10:49:43.716659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.314 [2024-12-09 10:49:43.716863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.314 [2024-12-09 10:49:43.716929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.314 [2024-12-09 10:49:43.716965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.314 [2024-12-09 10:49:43.716996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.314 [2024-12-09 10:49:43.717068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.314 qpair failed and we were unable to recover it.
00:38:59.314 [2024-12-09 10:49:43.726705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.314 [2024-12-09 10:49:43.726921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.726984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.727020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.727052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.727122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.736851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.737097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.737172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.737211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.737243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.737316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.746780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.746990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.747053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.747089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.747122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.747195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.756823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.757023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.757090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.757126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.757159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.757234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.766839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.767032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.767097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.767134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.767165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.767240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.776949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.777239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.777303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.777339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.777394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.777468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.786894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.787098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.787163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.787200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.787231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.787302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.796954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.797139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.797204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.797241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.797273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.797344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.807022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.807206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.807270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.807307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.807339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.807410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.817095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.817319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.817382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.817418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.817450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.817522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.827095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.827325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.827389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.827425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.827457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.827530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.837090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.837331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.837394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.837431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.837463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.837537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.847131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.847338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.847400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.315 [2024-12-09 10:49:43.847437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.315 [2024-12-09 10:49:43.847468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.315 [2024-12-09 10:49:43.847540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.315 qpair failed and we were unable to recover it.
00:38:59.315 [2024-12-09 10:49:43.857223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.315 [2024-12-09 10:49:43.857425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.315 [2024-12-09 10:49:43.857490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.857526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.857556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.857629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.867219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.867419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.867497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.867536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.867568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.867642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.877245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.877468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.877533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.877570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.877601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.877674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.887285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.887491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.887552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.887587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.887618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.887690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.897396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.897612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.897676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.897714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.897764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.897839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.907380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.907612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.907671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.907708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.907770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.907848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.917396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.917594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.917654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.917690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.917736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.917812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.927416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.927612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.927675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.927711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.927759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.927833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.937452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.937650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.937714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.937770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.937803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.937876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.947522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.947736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.947799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.947836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.947867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.947940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.957403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.957610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.957674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.957710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.316 [2024-12-09 10:49:43.957760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.316 [2024-12-09 10:49:43.957836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.316 qpair failed and we were unable to recover it.
00:38:59.316 [2024-12-09 10:49:43.967572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.316 [2024-12-09 10:49:43.967810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.316 [2024-12-09 10:49:43.967872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.316 [2024-12-09 10:49:43.967911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.578 [2024-12-09 10:49:43.967943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.578 [2024-12-09 10:49:43.968019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.578 qpair failed and we were unable to recover it.
00:38:59.578 [2024-12-09 10:49:43.977644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.578 [2024-12-09 10:49:43.977906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.578 [2024-12-09 10:49:43.977971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.578 [2024-12-09 10:49:43.978007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.578 [2024-12-09 10:49:43.978038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.578 [2024-12-09 10:49:43.978117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.578 qpair failed and we were unable to recover it.
00:38:59.578 [2024-12-09 10:49:43.987644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.578 [2024-12-09 10:49:43.987879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.578 [2024-12-09 10:49:43.987944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.578 [2024-12-09 10:49:43.987992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.578 [2024-12-09 10:49:43.988024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.578 [2024-12-09 10:49:43.988098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.578 qpair failed and we were unable to recover it.
00:38:59.578 [2024-12-09 10:49:43.997660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.578 [2024-12-09 10:49:43.997925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.578 [2024-12-09 10:49:43.998010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.578 [2024-12-09 10:49:43.998049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.578 [2024-12-09 10:49:43.998080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.578 [2024-12-09 10:49:43.998154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.578 qpair failed and we were unable to recover it.
00:38:59.578 [2024-12-09 10:49:44.007687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.578 [2024-12-09 10:49:44.007906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.578 [2024-12-09 10:49:44.007984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.578 [2024-12-09 10:49:44.008022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.578 [2024-12-09 10:49:44.008053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.008126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.017789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.018009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.018072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.018109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.018141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.018226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.027804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.028013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.028077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.028114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.028146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.028219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.037788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.037987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.038049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.038087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.038132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.038205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.047859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.048045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.048105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.048141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.048173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.048246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.058042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.058263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.058324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.058360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.058391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.058462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.067929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.068179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.068242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.068278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.068309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.068380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.077982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.078188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.078252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.078289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.078320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.078392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.087995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.088191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.088255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.088292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.088323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.088394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.098129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.098344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.098407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.098443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.098475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.098547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.108190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.108395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.108455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.108492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.108523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.108597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.118144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.118353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.118429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.118466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.118497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.118569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.128190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.128400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.128477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.128516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.128547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.128620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.138305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.579 [2024-12-09 10:49:44.138536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.579 [2024-12-09 10:49:44.138600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.579 [2024-12-09 10:49:44.138637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.579 [2024-12-09 10:49:44.138668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.579 [2024-12-09 10:49:44.138768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.579 qpair failed and we were unable to recover it.
00:38:59.579 [2024-12-09 10:49:44.148221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.148427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.148490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.148528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.148560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.148632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.158266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.158449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.158513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.158550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.158582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.158653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.168291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.168505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.168571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.168609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.168654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.168745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.178421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.178646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.178711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.178768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.178801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.178885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.188358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.188569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.188633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.188670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.188702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.188790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.198386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.198583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.198641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.198677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.198708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.198806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.208400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.208617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.208681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.208718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.208767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.208840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.218524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.218772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.218835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.218872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.218903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.218987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.580 [2024-12-09 10:49:44.228511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.580 [2024-12-09 10:49:44.228736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.580 [2024-12-09 10:49:44.228795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.580 [2024-12-09 10:49:44.228831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.580 [2024-12-09 10:49:44.228863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.580 [2024-12-09 10:49:44.228937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.580 qpair failed and we were unable to recover it.
00:38:59.840 [2024-12-09 10:49:44.238498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.238760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.238827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.238865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.238898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.238970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.248546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.248753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.248820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.248857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.248888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.248962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.258607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.258896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.258976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.259015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.259047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.259122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.268595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.268791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.268855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.268892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.268924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.268998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.278680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.278938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.279003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.279039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.279070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.279146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.288667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.288884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.288948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.288985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.289017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.289090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.298786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.299007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.299070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.299107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.299152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.299227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.308795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.308988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.309062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.309100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.309131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.309205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.318825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.841 [2024-12-09 10:49:44.319031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.841 [2024-12-09 10:49:44.319095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.841 [2024-12-09 10:49:44.319130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.841 [2024-12-09 10:49:44.319163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.841 [2024-12-09 10:49:44.319237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.841 qpair failed and we were unable to recover it.
00:38:59.841 [2024-12-09 10:49:44.328794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.328983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.329049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.329086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.329117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.329190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.338902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.339119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.339183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.339219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.339250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.339321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.348888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.349078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.349142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.349179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.349211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.349282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.358905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.359094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.359158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.359195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.359225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.359295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.368961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.369138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.369202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.369238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.369271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.369343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.379059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.379272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.379336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.379373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.379404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.379475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.389030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.389242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.389320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.389359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.389390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.389463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.399080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.399262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.399327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.399364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.399395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.399467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.409115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.409312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.409378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.409414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.842 [2024-12-09 10:49:44.409446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.842 [2024-12-09 10:49:44.409517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.842 qpair failed and we were unable to recover it.
00:38:59.842 [2024-12-09 10:49:44.419219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.842 [2024-12-09 10:49:44.419425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.842 [2024-12-09 10:49:44.419488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.842 [2024-12-09 10:49:44.419525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.419556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.419629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:38:59.843 [2024-12-09 10:49:44.429244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.843 [2024-12-09 10:49:44.429490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.843 [2024-12-09 10:49:44.429555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.843 [2024-12-09 10:49:44.429592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.429637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.429711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:38:59.843 [2024-12-09 10:49:44.439247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.843 [2024-12-09 10:49:44.439461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.843 [2024-12-09 10:49:44.439522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.843 [2024-12-09 10:49:44.439560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.439591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.439663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:38:59.843 [2024-12-09 10:49:44.449241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.843 [2024-12-09 10:49:44.449436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.843 [2024-12-09 10:49:44.449500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.843 [2024-12-09 10:49:44.449536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.449567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.449640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:38:59.843 [2024-12-09 10:49:44.459365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.843 [2024-12-09 10:49:44.459564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.843 [2024-12-09 10:49:44.459628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.843 [2024-12-09 10:49:44.459664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.459696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.459787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:38:59.843 [2024-12-09 10:49:44.469357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.843 [2024-12-09 10:49:44.469615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.843 [2024-12-09 10:49:44.469681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.843 [2024-12-09 10:49:44.469719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.469771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.469844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:38:59.843 [2024-12-09 10:49:44.479405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.843 [2024-12-09 10:49:44.479624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.843 [2024-12-09 10:49:44.479690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.843 [2024-12-09 10:49:44.479743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.479780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.479853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:38:59.843 [2024-12-09 10:49:44.489353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:59.843 [2024-12-09 10:49:44.489535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:59.843 [2024-12-09 10:49:44.489598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:59.843 [2024-12-09 10:49:44.489635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:59.843 [2024-12-09 10:49:44.489667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:38:59.843 [2024-12-09 10:49:44.489755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:59.843 qpair failed and we were unable to recover it.
00:39:00.104 [2024-12-09 10:49:44.499461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.104 [2024-12-09 10:49:44.499667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.104 [2024-12-09 10:49:44.499747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.104 [2024-12-09 10:49:44.499791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.104 [2024-12-09 10:49:44.499822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.104 [2024-12-09 10:49:44.499895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.104 qpair failed and we were unable to recover it.
00:39:00.104 [2024-12-09 10:49:44.509527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.104 [2024-12-09 10:49:44.509764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.104 [2024-12-09 10:49:44.509830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.104 [2024-12-09 10:49:44.509868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.104 [2024-12-09 10:49:44.509900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.104 [2024-12-09 10:49:44.509974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.104 qpair failed and we were unable to recover it.
00:39:00.104 [2024-12-09 10:49:44.519480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.104 [2024-12-09 10:49:44.519676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.104 [2024-12-09 10:49:44.519768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.104 [2024-12-09 10:49:44.519809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.104 [2024-12-09 10:49:44.519840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.104 [2024-12-09 10:49:44.519915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.104 qpair failed and we were unable to recover it.
00:39:00.104 [2024-12-09 10:49:44.529541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.104 [2024-12-09 10:49:44.529774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.104 [2024-12-09 10:49:44.529838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.104 [2024-12-09 10:49:44.529876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.104 [2024-12-09 10:49:44.529907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.104 [2024-12-09 10:49:44.529979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.104 qpair failed and we were unable to recover it.
00:39:00.104 [2024-12-09 10:49:44.539615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.104 [2024-12-09 10:49:44.539894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.104 [2024-12-09 10:49:44.539960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.539997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.540028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.540101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.549589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.549787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.549852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.549889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.549920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.549993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.559624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.559863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.559927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.559964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.560008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.560082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.569642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.569853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.569917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.569953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.569984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.570059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.579742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.579968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.580032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.580069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.580101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.580184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.589433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.589551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.589586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.589605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.589648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.589741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.599832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.600045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.600108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.600145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.600177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.600259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.609804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.609984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.610051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.610087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.610118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.610202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.619889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.620115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.620179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.620215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.620247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.620319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.629981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.630178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.630241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.630278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.630309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.630380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.639963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.640207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.640272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.640308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.640352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.640435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.649939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.650183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.650267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.650305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.650336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.650409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.660024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.660249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.660313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.660350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.660380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.660453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.105 qpair failed and we were unable to recover it.
00:39:00.105 [2024-12-09 10:49:44.669986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.105 [2024-12-09 10:49:44.670184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.105 [2024-12-09 10:49:44.670245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.105 [2024-12-09 10:49:44.670282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.105 [2024-12-09 10:49:44.670313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.105 [2024-12-09 10:49:44.670386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.680039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.680268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.680330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.680366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.680398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.680470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.690030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.690230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.690294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.690330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.690375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.690447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.700123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.700339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.700402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.700440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.700473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.700544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.710186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.710395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.710460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.710497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.710529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.710602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.720190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.720408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.720471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.720508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.720539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.720612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.730167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.730400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.730464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.730502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.730533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.730606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.740278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.740494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.740558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.740595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.740626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.740698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.106 [2024-12-09 10:49:44.750239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.106 [2024-12-09 10:49:44.750466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.106 [2024-12-09 10:49:44.750530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.106 [2024-12-09 10:49:44.750567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.106 [2024-12-09 10:49:44.750599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.106 [2024-12-09 10:49:44.750671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.106 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.760319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.760539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.760606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.760644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.760675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.760772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.770328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.770568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.770631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.770669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.770700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.770789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.780453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.780674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.780770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.780810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.780842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.780915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.790410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.790603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.790667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.790703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.790752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.790828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.800454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.800642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.800704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.800770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.800804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.800876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.810517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.810751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.810825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.810862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.810893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.810968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.820594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.820852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.820916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.820953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.820998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.821072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.830546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.830777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.830841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.830878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.830910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.830982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.367 [2024-12-09 10:49:44.840543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.367 [2024-12-09 10:49:44.840766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.367 [2024-12-09 10:49:44.840832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.367 [2024-12-09 10:49:44.840869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.367 [2024-12-09 10:49:44.840900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.367 [2024-12-09 10:49:44.840973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.367 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.850566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.850778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.850843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.850879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.850910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.850983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.860703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.860939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.861001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.861037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.861067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.861145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.870762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.870972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.871032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.871067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.871098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.871171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.880671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.880896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.880960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.880997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.881028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.881100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.890769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.890964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.891026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.891063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.891094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.891166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.900840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.901051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.901113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.901150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.901181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.901252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.910791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.911008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.911083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.911120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.911154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.911226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.920810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.921005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.921068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.921104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.921136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.921210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.930912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.931128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.931194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.931230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.931261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.931333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.940939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.941142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.941204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.941239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.941270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.941342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.950960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.951152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.951215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.951251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.951295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.951370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.960952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.961163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.961226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.961261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.961291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.961362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.970982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.368 [2024-12-09 10:49:44.971187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.368 [2024-12-09 10:49:44.971249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.368 [2024-12-09 10:49:44.971286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.368 [2024-12-09 10:49:44.971317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.368 [2024-12-09 10:49:44.971388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.368 qpair failed and we were unable to recover it.
00:39:00.368 [2024-12-09 10:49:44.981138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.369 [2024-12-09 10:49:44.981342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.369 [2024-12-09 10:49:44.981405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.369 [2024-12-09 10:49:44.981442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.369 [2024-12-09 10:49:44.981473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.369 [2024-12-09 10:49:44.981545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.369 qpair failed and we were unable to recover it.
00:39:00.369 [2024-12-09 10:49:44.991067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.369 [2024-12-09 10:49:44.991264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.369 [2024-12-09 10:49:44.991328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.369 [2024-12-09 10:49:44.991365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.369 [2024-12-09 10:49:44.991397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.369 [2024-12-09 10:49:44.991469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.369 qpair failed and we were unable to recover it.
00:39:00.369 [2024-12-09 10:49:45.001141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.369 [2024-12-09 10:49:45.001349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.369 [2024-12-09 10:49:45.001412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.369 [2024-12-09 10:49:45.001448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.369 [2024-12-09 10:49:45.001480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.369 [2024-12-09 10:49:45.001554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.369 qpair failed and we were unable to recover it.
00:39:00.369 [2024-12-09 10:49:45.011150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.369 [2024-12-09 10:49:45.011350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.369 [2024-12-09 10:49:45.011414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.369 [2024-12-09 10:49:45.011450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.369 [2024-12-09 10:49:45.011480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.369 [2024-12-09 10:49:45.011554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.369 qpair failed and we were unable to recover it.
00:39:00.640 [2024-12-09 10:49:45.021198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.640 [2024-12-09 10:49:45.021407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.640 [2024-12-09 10:49:45.021470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.640 [2024-12-09 10:49:45.021506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.640 [2024-12-09 10:49:45.021537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.640 [2024-12-09 10:49:45.021611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.640 qpair failed and we were unable to recover it.
00:39:00.640 [2024-12-09 10:49:45.031219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.640 [2024-12-09 10:49:45.031427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.640 [2024-12-09 10:49:45.031490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.640 [2024-12-09 10:49:45.031526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.640 [2024-12-09 10:49:45.031558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.640 [2024-12-09 10:49:45.031631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.640 qpair failed and we were unable to recover it.
00:39:00.640 [2024-12-09 10:49:45.041262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.640 [2024-12-09 10:49:45.041464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.640 [2024-12-09 10:49:45.041540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.640 [2024-12-09 10:49:45.041579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.640 [2024-12-09 10:49:45.041611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.640 [2024-12-09 10:49:45.041683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.640 qpair failed and we were unable to recover it.
00:39:00.640 [2024-12-09 10:49:45.051246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.640 [2024-12-09 10:49:45.051453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.640 [2024-12-09 10:49:45.051515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.640 [2024-12-09 10:49:45.051553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.640 [2024-12-09 10:49:45.051585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.640 [2024-12-09 10:49:45.051655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.640 qpair failed and we were unable to recover it.
00:39:00.640 [2024-12-09 10:49:45.061365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.640 [2024-12-09 10:49:45.061587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.640 [2024-12-09 10:49:45.061646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.640 [2024-12-09 10:49:45.061680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.640 [2024-12-09 10:49:45.061711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.640 [2024-12-09 10:49:45.061802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.640 qpair failed and we were unable to recover it.
00:39:00.640 [2024-12-09 10:49:45.071338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.640 [2024-12-09 10:49:45.071553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.640 [2024-12-09 10:49:45.071617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.640 [2024-12-09 10:49:45.071652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.640 [2024-12-09 10:49:45.071682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.640 [2024-12-09 10:49:45.071769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.081401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.081616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.641 [2024-12-09 10:49:45.081679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.641 [2024-12-09 10:49:45.081714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.641 [2024-12-09 10:49:45.081780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.641 [2024-12-09 10:49:45.081856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.091421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.091617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.641 [2024-12-09 10:49:45.091681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.641 [2024-12-09 10:49:45.091717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.641 [2024-12-09 10:49:45.091769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.641 [2024-12-09 10:49:45.091842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.101507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.101741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.641 [2024-12-09 10:49:45.101806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.641 [2024-12-09 10:49:45.101842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.641 [2024-12-09 10:49:45.101872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.641 [2024-12-09 10:49:45.101944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.111501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.111691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.641 [2024-12-09 10:49:45.111769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.641 [2024-12-09 10:49:45.111808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.641 [2024-12-09 10:49:45.111839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.641 [2024-12-09 10:49:45.111910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.121501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.121740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.641 [2024-12-09 10:49:45.121804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.641 [2024-12-09 10:49:45.121841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.641 [2024-12-09 10:49:45.121873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.641 [2024-12-09 10:49:45.121946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.131574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.131795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.641 [2024-12-09 10:49:45.131859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.641 [2024-12-09 10:49:45.131895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.641 [2024-12-09 10:49:45.131926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.641 [2024-12-09 10:49:45.132000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.141863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.142119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.641 [2024-12-09 10:49:45.142182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.641 [2024-12-09 10:49:45.142216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.641 [2024-12-09 10:49:45.142247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.641 [2024-12-09 10:49:45.142320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.641 qpair failed and we were unable to recover it.
00:39:00.641 [2024-12-09 10:49:45.151587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.641 [2024-12-09 10:49:45.151789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.642 [2024-12-09 10:49:45.151853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.642 [2024-12-09 10:49:45.151889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.642 [2024-12-09 10:49:45.151918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.642 [2024-12-09 10:49:45.151990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.642 qpair failed and we were unable to recover it.
00:39:00.642 [2024-12-09 10:49:45.161613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.642 [2024-12-09 10:49:45.161820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.642 [2024-12-09 10:49:45.161885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.642 [2024-12-09 10:49:45.161921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.642 [2024-12-09 10:49:45.161952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.642 [2024-12-09 10:49:45.162025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.642 qpair failed and we were unable to recover it.
00:39:00.642 [2024-12-09 10:49:45.171665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.642 [2024-12-09 10:49:45.171881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.642 [2024-12-09 10:49:45.171956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.642 [2024-12-09 10:49:45.171996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.642 [2024-12-09 10:49:45.172027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.642 [2024-12-09 10:49:45.172100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.642 qpair failed and we were unable to recover it.
00:39:00.642 [2024-12-09 10:49:45.181712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.642 [2024-12-09 10:49:45.181948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.642 [2024-12-09 10:49:45.182011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.642 [2024-12-09 10:49:45.182047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.642 [2024-12-09 10:49:45.182078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.642 [2024-12-09 10:49:45.182151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.642 qpair failed and we were unable to recover it.
00:39:00.642 [2024-12-09 10:49:45.191766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.642 [2024-12-09 10:49:45.191967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.642 [2024-12-09 10:49:45.192030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.642 [2024-12-09 10:49:45.192065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.642 [2024-12-09 10:49:45.192096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.642 [2024-12-09 10:49:45.192168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.642 qpair failed and we were unable to recover it.
00:39:00.642 [2024-12-09 10:49:45.201765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.642 [2024-12-09 10:49:45.201964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.642 [2024-12-09 10:49:45.202026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.642 [2024-12-09 10:49:45.202063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.642 [2024-12-09 10:49:45.202093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.642 [2024-12-09 10:49:45.202166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.642 qpair failed and we were unable to recover it.
00:39:00.642 [2024-12-09 10:49:45.211805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.642 [2024-12-09 10:49:45.212004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.642 [2024-12-09 10:49:45.212066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.642 [2024-12-09 10:49:45.212102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.642 [2024-12-09 10:49:45.212147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.642 [2024-12-09 10:49:45.212222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.642 qpair failed and we were unable to recover it.
00:39:00.642 [2024-12-09 10:49:45.221916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.642 [2024-12-09 10:49:45.222132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.643 [2024-12-09 10:49:45.222195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.643 [2024-12-09 10:49:45.222231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.643 [2024-12-09 10:49:45.222262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.643 [2024-12-09 10:49:45.222334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.643 qpair failed and we were unable to recover it.
00:39:00.643 [2024-12-09 10:49:45.231892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.643 [2024-12-09 10:49:45.232091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.643 [2024-12-09 10:49:45.232154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.643 [2024-12-09 10:49:45.232190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.643 [2024-12-09 10:49:45.232220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.643 [2024-12-09 10:49:45.232292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.643 qpair failed and we were unable to recover it.
00:39:00.643 [2024-12-09 10:49:45.241886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.643 [2024-12-09 10:49:45.242089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.643 [2024-12-09 10:49:45.242154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.643 [2024-12-09 10:49:45.242190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.643 [2024-12-09 10:49:45.242220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.643 [2024-12-09 10:49:45.242291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.643 qpair failed and we were unable to recover it.
00:39:00.643 [2024-12-09 10:49:45.251916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.643 [2024-12-09 10:49:45.252118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.643 [2024-12-09 10:49:45.252182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.643 [2024-12-09 10:49:45.252220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.643 [2024-12-09 10:49:45.252251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.643 [2024-12-09 10:49:45.252322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.643 qpair failed and we were unable to recover it.
00:39:00.643 [2024-12-09 10:49:45.262036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.643 [2024-12-09 10:49:45.262270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.643 [2024-12-09 10:49:45.262333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.643 [2024-12-09 10:49:45.262369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.643 [2024-12-09 10:49:45.262401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.643 [2024-12-09 10:49:45.262474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.643 qpair failed and we were unable to recover it.
00:39:00.643 [2024-12-09 10:49:45.272068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.643 [2024-12-09 10:49:45.272292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.643 [2024-12-09 10:49:45.272354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.643 [2024-12-09 10:49:45.272390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.643 [2024-12-09 10:49:45.272420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.643 [2024-12-09 10:49:45.272493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.643 qpair failed and we were unable to recover it.
00:39:00.643 [2024-12-09 10:49:45.282038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.643 [2024-12-09 10:49:45.282249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.643 [2024-12-09 10:49:45.282313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.643 [2024-12-09 10:49:45.282349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.643 [2024-12-09 10:49:45.282379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.643 [2024-12-09 10:49:45.282452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.643 qpair failed and we were unable to recover it.
00:39:00.643 [2024-12-09 10:49:45.292101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.906 [2024-12-09 10:49:45.292306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.906 [2024-12-09 10:49:45.292371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.906 [2024-12-09 10:49:45.292407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.906 [2024-12-09 10:49:45.292439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.906 [2024-12-09 10:49:45.292511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.906 qpair failed and we were unable to recover it.
00:39:00.906 [2024-12-09 10:49:45.302145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.906 [2024-12-09 10:49:45.302388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.906 [2024-12-09 10:49:45.302471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.906 [2024-12-09 10:49:45.302509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.906 [2024-12-09 10:49:45.302540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.906 [2024-12-09 10:49:45.302613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.906 qpair failed and we were unable to recover it.
00:39:00.906 [2024-12-09 10:49:45.312185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.906 [2024-12-09 10:49:45.312397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.906 [2024-12-09 10:49:45.312460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.906 [2024-12-09 10:49:45.312496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.906 [2024-12-09 10:49:45.312527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.906 [2024-12-09 10:49:45.312599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.906 qpair failed and we were unable to recover it.
00:39:00.906 [2024-12-09 10:49:45.322158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.906 [2024-12-09 10:49:45.322363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.906 [2024-12-09 10:49:45.322426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.906 [2024-12-09 10:49:45.322462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.906 [2024-12-09 10:49:45.322494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.906 [2024-12-09 10:49:45.322566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.906 qpair failed and we were unable to recover it.
00:39:00.906 [2024-12-09 10:49:45.332176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.906 [2024-12-09 10:49:45.332402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.906 [2024-12-09 10:49:45.332464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.906 [2024-12-09 10:49:45.332500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.906 [2024-12-09 10:49:45.332533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.906 [2024-12-09 10:49:45.332604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.906 qpair failed and we were unable to recover it.
00:39:00.906 [2024-12-09 10:49:45.342264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.906 [2024-12-09 10:49:45.342495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.906 [2024-12-09 10:49:45.342559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.906 [2024-12-09 10:49:45.342596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.906 [2024-12-09 10:49:45.342641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.906 [2024-12-09 10:49:45.342716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.906 qpair failed and we were unable to recover it.
00:39:00.906 [2024-12-09 10:49:45.352284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:00.907 [2024-12-09 10:49:45.352496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:00.907 [2024-12-09 10:49:45.352558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:00.907 [2024-12-09 10:49:45.352595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:00.907 [2024-12-09 10:49:45.352626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0
00:39:00.907 [2024-12-09 10:49:45.352698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:39:00.907 qpair failed and we were unable to recover it.
00:39:00.907 [2024-12-09 10:49:45.362294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.362491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.362555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.362591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.362623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:39:00.907 [2024-12-09 10:49:45.362694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:00.907 qpair failed and we were unable to recover it. 00:39:00.907 [2024-12-09 10:49:45.372332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.372554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.372616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.372652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.372685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:39:00.907 [2024-12-09 10:49:45.372772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:00.907 qpair failed and we were unable to recover it. 00:39:00.907 [2024-12-09 10:49:45.382426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.382696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.382774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.382814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.382846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:39:00.907 [2024-12-09 10:49:45.382918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:00.907 qpair failed and we were unable to recover it. 
00:39:00.907 [2024-12-09 10:49:45.392407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.392594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.392656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.392691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.392741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:39:00.907 [2024-12-09 10:49:45.392820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:00.907 qpair failed and we were unable to recover it. 00:39:00.907 [2024-12-09 10:49:45.402429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.402627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.402692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.402744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.402797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:39:00.907 [2024-12-09 10:49:45.402876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:00.907 qpair failed and we were unable to recover it. 00:39:00.907 [2024-12-09 10:49:45.412463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.412674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.412760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.412801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.412833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:39:00.907 [2024-12-09 10:49:45.412906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:00.907 qpair failed and we were unable to recover it. 
00:39:00.907 [2024-12-09 10:49:45.422567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.422803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.422866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.422902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.422931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xefa5d0 00:39:00.907 [2024-12-09 10:49:45.423004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:39:00.907 qpair failed and we were unable to recover it. 00:39:00.907 [2024-12-09 10:49:45.432774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.432982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.433078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.433120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.433152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7f78000b90 00:39:00.907 [2024-12-09 10:49:45.433233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:39:00.907 qpair failed and we were unable to recover it. 00:39:00.907 [2024-12-09 10:49:45.442618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:00.907 [2024-12-09 10:49:45.442820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:00.907 [2024-12-09 10:49:45.442887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:00.907 [2024-12-09 10:49:45.442923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:00.907 [2024-12-09 10:49:45.442954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7f78000b90 00:39:00.907 [2024-12-09 10:49:45.443032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:39:00.907 qpair failed and we were unable to recover it. 00:39:00.907 [2024-12-09 10:49:45.443247] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:39:00.907 A controller has encountered a failure and is being reset. 00:39:00.907 Controller properly reset. 
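The loop above is the tc2 disconnect case doing its job: the target side has dropped controller ID 1, so each host attempt to re-attach an I/O queue pair dies at the Fabrics CONNECT step. The reported status, sct 1, sc 130, reads as status code type 1 (command specific) with code 0x82, which the NVMe-oF spec defines for CONNECT as Connect Invalid Parameters; that lines up with the target-side "Unknown controller ID 0x1" message, and the host keeps declaring each qpair unrecoverable until the failed keep-alive finally forces the full controller reset logged at the end. As a hedged aside (not part of the harness), a single CONNECT attempt against the same listener could be tried by hand with nvme-cli, assuming the kernel initiator modules are available:

  # manual probe of the target from this run; address, port and NQN are taken from the log
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                                # inspect the resulting association
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # tear it back down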
00:39:01.166 Initializing NVMe Controllers
00:39:01.166 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:01.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:01.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:39:01.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:39:01.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:39:01.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:39:01.166 Initialization complete. Launching workers.
00:39:01.166 Starting thread on core 1
00:39:01.166 Starting thread on core 2
00:39:01.166 Starting thread on core 3
00:39:01.166 Starting thread on core 0
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:39:01.166
00:39:01.166 real 0m11.375s
00:39:01.166 user 0m19.960s
00:39:01.166 sys 0m5.596s
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:01.166 ************************************
00:39:01.166 END TEST nvmf_target_disconnect_tc2
00:39:01.166 ************************************
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:01.166 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:01.166 rmmod nvme_tcp
00:39:01.166 rmmod nvme_fabrics
00:39:01.166 rmmod nvme_keyring
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2246108 ']'
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2246108
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2246108 ']'
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2246108
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246108
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246108'
00:39:01.167 killing process with pid 2246108
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2246108
00:39:01.167 10:49:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2246108
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:01.427 10:49:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:03.966 10:49:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:03.966
00:39:03.966 real 0m17.597s
00:39:03.966 user 0m47.358s
00:39:03.966 sys 0m8.746s
00:39:03.966 10:49:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:03.966 10:49:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:39:03.966 ************************************
00:39:03.966 END TEST nvmf_target_disconnect
00:39:03.966 ************************************
00:39:03.967 10:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:39:03.967
00:39:03.967 real 6m35.096s
00:39:03.967 user 13m46.219s
00:39:03.967 sys 1m41.609s
00:39:03.967 10:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:03.967 10:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.967 ************************************
00:39:03.967 END TEST nvmf_host
00:39:03.967 ************************************
00:39:03.967 10:49:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:39:03.967 10:49:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:39:03.967 10:49:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode
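The nvmftestfini trace above boils down to a short cleanup sequence; a condensed sketch of it (the pid 2246108 and the cvl_* interface names are specific to this run, and the wait only succeeds in the shell that launched the app):

  kill 2246108 && wait 2246108                          # stop the SPDK target (reactor_4)
  modprobe -v -r nvme-tcp                               # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK_NVMF-tagged rules
  ip -4 addr flush cvl_0_1                              # clear the initiator-side test address

With that, the nvmf_host phase is done; the run_test invocation above continues below with the nvmf_target_core.sh script path for the interrupt-mode suite.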
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:03.967 10:49:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:03.967 10:49:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:03.967 10:49:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:03.967 ************************************ 00:39:03.967 START TEST nvmf_target_core_interrupt_mode 00:39:03.967 ************************************ 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:03.967 * Looking for test storage... 00:39:03.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:03.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.967 --rc genhtml_branch_coverage=1 00:39:03.967 --rc genhtml_function_coverage=1 00:39:03.967 --rc genhtml_legend=1 00:39:03.967 --rc geninfo_all_blocks=1 00:39:03.967 --rc geninfo_unexecuted_blocks=1 00:39:03.967 00:39:03.967 ' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:03.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.967 --rc genhtml_branch_coverage=1 00:39:03.967 --rc genhtml_function_coverage=1 00:39:03.967 --rc genhtml_legend=1 00:39:03.967 --rc geninfo_all_blocks=1 00:39:03.967 --rc geninfo_unexecuted_blocks=1 00:39:03.967 00:39:03.967 ' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:03.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.967 --rc genhtml_branch_coverage=1 00:39:03.967 --rc genhtml_function_coverage=1 00:39:03.967 --rc genhtml_legend=1 00:39:03.967 --rc geninfo_all_blocks=1 00:39:03.967 --rc geninfo_unexecuted_blocks=1 00:39:03.967 00:39:03.967 ' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:03.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.967 --rc genhtml_branch_coverage=1 00:39:03.967 --rc genhtml_function_coverage=1 00:39:03.967 --rc genhtml_legend=1 00:39:03.967 --rc geninfo_all_blocks=1 00:39:03.967 --rc geninfo_unexecuted_blocks=1 00:39:03.967 00:39:03.967 ' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:39:03.967 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:03.968 ************************************ 00:39:03.968 START TEST nvmf_abort 00:39:03.968 ************************************ 00:39:03.968 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:04.227 * Looking for test storage... 00:39:04.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:04.227 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:04.227 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:39:04.227 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:04.487 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.488 --rc genhtml_branch_coverage=1 00:39:04.488 --rc genhtml_function_coverage=1 00:39:04.488 --rc genhtml_legend=1 00:39:04.488 --rc geninfo_all_blocks=1 00:39:04.488 --rc geninfo_unexecuted_blocks=1 00:39:04.488 00:39:04.488 ' 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.488 --rc genhtml_branch_coverage=1 00:39:04.488 --rc genhtml_function_coverage=1 00:39:04.488 --rc genhtml_legend=1 00:39:04.488 --rc geninfo_all_blocks=1 00:39:04.488 --rc geninfo_unexecuted_blocks=1 00:39:04.488 00:39:04.488 ' 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.488 --rc genhtml_branch_coverage=1 00:39:04.488 --rc genhtml_function_coverage=1 00:39:04.488 --rc genhtml_legend=1 00:39:04.488 --rc geninfo_all_blocks=1 00:39:04.488 --rc geninfo_unexecuted_blocks=1 00:39:04.488 00:39:04.488 ' 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.488 --rc genhtml_branch_coverage=1 00:39:04.488 --rc genhtml_function_coverage=1 00:39:04.488 --rc genhtml_legend=1 00:39:04.488 --rc geninfo_all_blocks=1 00:39:04.488 --rc geninfo_unexecuted_blocks=1 00:39:04.488 00:39:04.488 ' 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:39:04.488 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:04.489 10:49:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:39:04.489 10:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:07.779 10:49:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:07.779 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:07.780 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
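The device walk that continues below resolves each matching PCI function to its kernel net devices through sysfs. A minimal standalone version of that lookup, assuming the E810 port at 0000:84:00.0 identified above:

  pci=0000:84:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same sysfs glob common.sh traces here
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
  echo "net devices under $pci: ${pci_net_devs[*]}"  # prints cvl_0_0 in this run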
00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:07.780 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:07.780 Found net devices under 0000:84:00.0: cvl_0_0 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:07.780 Found net devices under 0000:84:00.1: cvl_0_1 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:07.780 10:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:07.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:07.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:39:07.780 00:39:07.780 --- 10.0.0.2 ping statistics --- 00:39:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.780 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:07.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:07.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:39:07.780 00:39:07.780 --- 10.0.0.1 ping statistics --- 00:39:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.780 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:07.780 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2249062 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2249062 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2249062 ']' 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:07.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.781 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:07.781 [2024-12-09 10:49:52.319027] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:07.781 [2024-12-09 10:49:52.321791] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:39:07.781 [2024-12-09 10:49:52.321929] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.040 [2024-12-09 10:49:52.513325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:08.040 [2024-12-09 10:49:52.634378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.040 [2024-12-09 10:49:52.634487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.040 [2024-12-09 10:49:52.634525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.040 [2024-12-09 10:49:52.634557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.040 [2024-12-09 10:49:52.634583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:08.040 [2024-12-09 10:49:52.637853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.040 [2024-12-09 10:49:52.637953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:08.040 [2024-12-09 10:49:52.637958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.299 [2024-12-09 10:49:52.814841] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:08.299 [2024-12-09 10:49:52.815064] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:08.299 [2024-12-09 10:49:52.815111] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
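The setup traced above condenses to two steps: the harness turns the dual-port E810 NIC into a self-contained loopback by moving one port (cvl_0_0) into a private network namespace while its peer (cvl_0_1) stays in the root namespace, then launches the target inside that namespace. A minimal sketch using only commands that appear in the trace (paths shortened; the error handling in nvmf/common.sh is omitted, and waitforlisten then polls until /var/tmp/spdk.sock accepts RPCs):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from a clean slate
    ip netns add cvl_0_0_ns_spdk                            # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tags the rule so teardown can find it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                      # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> initiator port
    # launch nvmf_tgt inside the namespace: interrupt mode, cores 1-3 (-m 0xE)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!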
00:39:08.299 [2024-12-09 10:49:52.815403] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:08.299 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.299 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:39:08.299 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:08.299 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.299 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 [2024-12-09 10:49:52.970855] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.559 10:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 Malloc0 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 Delay0 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
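Once the RPC socket is up, target/abort.sh provisions the subsystem through rpc_cmd, the harness's wrapper around scripts/rpc.py. The Delay0 layer is the point of the test: it adds 1,000,000 us (1 s) of latency to every read and write, so submitted I/O is still queued when the aborts arrive. Condensed from the trace (the add_ns and listener calls complete in the entries that follow):

    rpc=scripts/rpc.py    # stands in for rpc_cmd; flag meanings per rpc.py --help
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4 KiB blocks
    # wrap Malloc0 in a delay bdev: ~1 s average and p99 latency both ways
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420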
00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 [2024-12-09 10:49:53.071613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.559 10:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:39:08.559 [2024-12-09 10:49:53.147334] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:11.097 Initializing NVMe Controllers 00:39:11.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:11.097 controller IO queue size 128 less than required 00:39:11.097 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:39:11.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:39:11.097 Initialization complete. Launching workers. 
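The abort example then connects from the root namespace and hammers the delayed namespace: at queue depth 128 against a bdev that holds every I/O for a second, nearly everything it submits is still outstanding and becomes an abort candidate. The invocation, with flag readings hedged (the example's --help is authoritative):

    # -r: transport ID of the listener created above
    # -c 0x1: one core; -q 128: queue depth; -t 1: run time in seconds, as I read it
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128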
00:39:11.097 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28776 00:39:11.097 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28833, failed to submit 66 00:39:11.097 success 28776, unsuccessful 57, failed 0 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:11.097 rmmod nvme_tcp 00:39:11.097 rmmod nvme_fabrics 00:39:11.097 rmmod nvme_keyring 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2249062 ']' 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2249062 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2249062 ']' 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2249062 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2249062 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2249062' 00:39:11.097 killing process with pid 2249062 
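Backing up to the result block above, the tallies are internally consistent, which is effectively the sanity check here: every queued I/O either completed or was aborted, and every abort attempt is accounted for.

    # 123 I/Os completed normally, 28776 were aborted -> 28899 total I/Os
    echo $(( 123 + 28776 ))    # 28899
    # 28833 aborts were submitted, 66 could not be submitted -> the same 28899
    echo $(( 28833 + 66 ))     # 28899
    # of the submitted aborts: 28776 succeeded, 57 were unsuccessful
    echo $(( 28776 + 57 ))     # 28833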
00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2249062 00:39:11.097 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2249062 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:11.369 10:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.917 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:13.917 00:39:13.917 real 0m9.346s 00:39:13.917 user 0m10.569s 00:39:13.917 sys 0m4.243s 00:39:13.917 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:13.917 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:13.917 ************************************ 00:39:13.917 END TEST nvmf_abort 00:39:13.917 ************************************ 00:39:13.917 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:13.917 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:13.917 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:13.917 10:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:13.917 ************************************ 00:39:13.917 START TEST nvmf_ns_hotplug_stress 00:39:13.917 ************************************ 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:13.917 * Looking for test storage... 
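nvmftestfini's teardown, condensed from the trace just above: kill the target, strip only the tagged firewall rule (this is what the SPDK_NVMF comment was for), and tear down the namespace. The netns delete is my assumed expansion of _remove_spdk_ns, which the trace evals without showing its body:

    kill 2249062 && wait 2249062            # killprocess: stop nvmf_tgt and reap it
    # iptr: rewrite the ruleset minus anything tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk         # assumption: what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1                # final cleanup of the initiator port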
00:39:13.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:13.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.917 --rc genhtml_branch_coverage=1 00:39:13.917 --rc genhtml_function_coverage=1 00:39:13.917 --rc genhtml_legend=1 00:39:13.917 --rc geninfo_all_blocks=1 00:39:13.917 --rc geninfo_unexecuted_blocks=1 00:39:13.917 00:39:13.917 ' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:13.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.917 --rc genhtml_branch_coverage=1 00:39:13.917 --rc genhtml_function_coverage=1 00:39:13.917 --rc genhtml_legend=1 00:39:13.917 --rc geninfo_all_blocks=1 00:39:13.917 --rc geninfo_unexecuted_blocks=1 00:39:13.917 00:39:13.917 ' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:13.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.917 --rc genhtml_branch_coverage=1 00:39:13.917 --rc genhtml_function_coverage=1 00:39:13.917 --rc genhtml_legend=1 00:39:13.917 --rc geninfo_all_blocks=1 00:39:13.917 --rc geninfo_unexecuted_blocks=1 00:39:13.917 00:39:13.917 ' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:13.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:13.917 --rc genhtml_branch_coverage=1 00:39:13.917 --rc genhtml_function_coverage=1 
00:39:13.917 --rc genhtml_legend=1 00:39:13.917 --rc geninfo_all_blocks=1 00:39:13.917 --rc geninfo_unexecuted_blocks=1 00:39:13.917 00:39:13.917 ' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:13.917 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
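The noisy block a little earlier is scripts/common.sh deciding whether the installed lcov predates 2.x (lt 1.15 2), which selects the extra branch/function coverage flags exported above. Reconstructed from the xtrace alone, not copied from the source tree, the comparison is roughly:

    # hedged reconstruction of the lt/cmp_versions walk in the trace;
    # the real scripts/common.sh also handles '>', '>=', '<=', '=='
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"          # "1.15" -> (1 15), ver1_l=2
        IFS=.-: read -ra ver2 <<< "$3"          # "2"    -> (2),    ver2_l=1
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]}") ver2[v]=$(decimal "${ver2[v]}")
            (( ver1[v] > ver2[v] )) && return 1
            (( ver1[v] < ver2[v] )) && return 0  # 1 < 2: 'lt 1.15 2' succeeds here
        done
        return 1   # versions equal so far; strict '<' fails (assumption)
    }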
00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:39:13.918 10:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:17.217 10:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:17.217 10:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:17.217 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:17.217 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.217 
10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:17.217 Found net devices under 0000:84:00.0: cvl_0_0 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:17.217 Found net devices under 0000:84:00.1: cvl_0_1 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:17.217 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.218 10:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:17.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:17.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:39:17.218 00:39:17.218 --- 10.0.0.2 ping statistics --- 00:39:17.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.218 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:17.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:39:17.218 00:39:17.218 --- 10.0.0.1 ping statistics --- 00:39:17.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.218 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2251460 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2251460 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2251460 ']' 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:17.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:17.218 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:17.218 [2024-12-09 10:50:01.653859] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:17.218 [2024-12-09 10:50:01.656639] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:39:17.218 [2024-12-09 10:50:01.656829] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:17.218 [2024-12-09 10:50:01.844603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:17.480 [2024-12-09 10:50:01.965191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:17.480 [2024-12-09 10:50:01.965303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:17.480 [2024-12-09 10:50:01.965341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:17.480 [2024-12-09 10:50:01.965371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:17.480 [2024-12-09 10:50:01.965398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:17.480 [2024-12-09 10:50:01.968652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:17.480 [2024-12-09 10:50:01.968765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:17.480 [2024-12-09 10:50:01.968771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.739 [2024-12-09 10:50:02.145877] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:17.739 [2024-12-09 10:50:02.146074] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:17.739 [2024-12-09 10:50:02.146122] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:17.739 [2024-12-09 10:50:02.146410] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
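The oddly phrased "to intr mode from intr mode" notices are a quirk of this job's configuration: build_nvmf_app_args appends --interrupt-mode (the '[ 1 -eq 1 ]' branch in the trace), so threads are born in interrupt mode and then explicitly set to it again at startup. Pieced together from the common.sh trace, with the base command as an assumption:

    NVMF_APP=(./build/bin/nvmf_tgt)             # assumption: base command before args
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    NVMF_APP+=("${NO_HUGE[@]}")                 # empty unless no-hugepages mode
    NVMF_APP+=(--interrupt-mode)                # SPDK_TEST interrupt flag is 1 here
    # later, nvmf_tcp_init prefixes the namespace wrapper:
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0xE &                   # what nvmfappstart ultimately runs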
00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:39:17.739 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:17.999 [2024-12-09 10:50:02.593994] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.999 10:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:18.569 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:18.828 [2024-12-09 10:50:03.450457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:19.088 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:19.346 10:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:19.606 Malloc0 00:39:19.606 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:20.543 Delay0 00:39:20.543 10:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.802 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:21.369 NULL1 00:39:21.369 10:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
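Setup for the hotplug stress differs from the abort test in two ways that matter to the loop below: the subsystem is created with -m 10, a low cap on namespaces, and a second namespace, NULL1, is backed by a resizable null bdev (1000 MiB of 512-byte blocks, as I read bdev_null_create's positionals). Condensed from the rpc.py calls in the trace:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0          # 32 MiB, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512               # NULL1 gets resized repeatedly below
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1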
00:39:21.938 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2252036 00:39:21.939 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:21.939 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:21.939 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.321 Read completed with error (sct=0, sc=11) 00:39:23.321 10:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:23.583 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:23.583 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:24.155 true 00:39:24.155 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:24.155 10:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:25.540 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:25.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:25.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:25.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:25.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:25.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:25.799 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:39:26.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.058 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:26.058 10:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:26.627 true 00:39:26.627 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:26.627 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.195 10:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:27.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:27.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:27.765 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:27.765 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:28.334 true 00:39:28.334 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:28.334 10:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.273 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:29.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.792 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:29.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:30.052 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:30.052 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:30.312 true 00:39:30.312 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:30.312 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.218 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:32.477 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:32.477 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:32.735 true 00:39:32.993 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:32.993 10:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.929 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:34.446 10:50:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:34.447 10:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:35.016 true 00:39:35.016 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:35.016 10:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:36.425 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:37.360 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:39:37.360 10:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:37.618 true 00:39:37.618 10:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:37.618 10:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.555 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:38.846 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:38.846 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:39.414 true 00:39:39.414 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:39.414 10:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:40.808 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:40.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:40.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:40.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:41.067 10:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:41.067 10:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:41.637 true 00:39:41.637 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:41.637 10:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:43.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.028 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:43.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:43.286 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:43.286 10:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:43.853 true 00:39:43.853 10:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:43.853 10:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:44.421 10:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:44.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:44.681 10:50:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:44.681 10:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:44.939 true 00:39:44.939 10:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:44.939 10:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 10:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:45.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:46.135 10:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:46.135 10:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:46.702 true 00:39:46.702 10:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:46.702 10:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:46.960 10:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:47.223 10:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:47.223 10:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:47.791 true 00:39:47.791 10:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:47.791 10:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:49.165 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:39:49.165 10:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:49.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:49.943 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:49.943 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:50.201 true 00:39:50.460 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036 00:39:50.460 10:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:50.717 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:50.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:50.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:50.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:50.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:50.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:50.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:50.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:51.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:51.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:51.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:51.233 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:51.233 10:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:39:51.492 true
00:39:51.492 10:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036
00:39:51.492 10:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:53.394 Initializing NVMe Controllers
00:39:53.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:53.394 Controller IO queue size 128, less than required.
00:39:53.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:53.394 Controller IO queue size 128, less than required.
00:39:53.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:53.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:39:53.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:53.394 Initialization complete. Launching workers.
00:39:53.394 ========================================================
00:39:53.394 Latency(us)
00:39:53.394 Device Information : IOPS MiB/s Average min max
00:39:53.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3781.12 1.85 26791.27 2962.06 2014999.54
00:39:53.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15842.14 7.74 8079.91 2233.99 446840.68
00:39:53.394 ========================================================
00:39:53.394 Total : 19623.26 9.58 11685.32 2233.99 2014999.54
00:39:53.394
00:39:53.394 10:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:53.653 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:39:53.653 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:39:53.911 true
00:39:53.911 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2252036
00:39:53.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2252036) - No such process
00:39:53.911 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2252036
00:39:53.911 10:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:54.479 10:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:55.416 10:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:39:55.416 10:50:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:39:55.416 10:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:39:55.416 10:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:55.416 10:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:55.983 null0 00:39:55.983 10:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:55.983 10:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:55.983 10:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:56.242 null1 00:39:56.242 10:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:56.242 10:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:56.242 10:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:56.809 null2 00:39:56.809 10:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:56.809 10:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:56.809 10:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:57.376 null3 00:39:57.376 10:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:57.376 10:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:57.376 10:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:57.943 null4 00:39:57.943 10:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:57.943 10:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:57.943 10:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:58.512 null5 00:39:58.512 10:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:58.512 10:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:58.512 10:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:59.079 null6 00:39:59.079 10:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:59.079 10:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:59.079 10:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:59.337 null7 00:39:59.596 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:59.596 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:59.596 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:59.596 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.596 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:59.596 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
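Stepping back before the eight-worker phase that is starting here: the single-namespace phase that ended above (the perf latency summary and the 'No such process' probe) reads much more clearly as the loop it came from. A reconstruction from the logged script line numbers, sh@40 through sh@53; the commands, flags and variables are taken directly from the xtrace, while the exact while-loop shape is an assumption ($rpc and $nqn as in the bring-up sketch earlier):

    # 30 s of QD-128 random 512 B reads against both namespaces (sh@40, sh@42).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                    # 2252036 in this run

    null_size=1000                                 # sh@25
    while kill -0 "$PERF_PID"; do                  # sh@44: kill -0 only probes the PID, sends no signal
        $rpc nvmf_subsystem_remove_ns "$nqn" 1     # sh@45: hot-remove NSID 1 under load
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0   # sh@46: re-add it
        (( ++null_size ))                          # sh@49: 1001, 1002, ... (reaches 1016 in this run)
        $rpc bdev_null_resize NULL1 "$null_size"   # sh@50: grow NSID 2 under load ('true' = RPC succeeded)
    done
    wait "$PERF_PID"                               # sh@53

The flood of 'Read completed with error (sct=0, sc=11)' entries is spdk_nvme_perf reporting reads that raced a removal; the -Q 1000 option evidently lets the run continue on errors and rate-limits the prints, hence 'Message suppressed 999 times'. Assuming the codes are printed in decimal, sct=0 / sc=11 (0x0B) is the NVMe generic status Invalid Namespace or Format, exactly what I/O against a just-removed namespace should return. Once the 30-second run exits, kill -0 fails ('No such process') and the loop ends.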
00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
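The worker that each of the eight threads runs is visible in the sh@14-sh@18 xtrace entries interleaved through this region. Reconstructed as a function, inferred only from those logged lines ($rpc and $nqn as before):

    # add_remove <nsid> <bdev>: ten add/remove round-trips of one namespace
    # (reconstruction of target/ns_hotplug_stress.sh@14-18).
    add_remove() {
        local nsid=$1 bdev=$2                                     # sh@14
        for (( i = 0; i < 10; i++ )); do                          # sh@16
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # sh@17
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"          # sh@18
        done
    }

Each worker owns one fixed NSID, so the eight workers never collide on a namespace ID, only on the shared subsystem they all mutate.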
00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
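Around these entries the driver (sh@58 through sh@66) fans the workers out: eight small null bdevs, one background add_remove call per bdev, then a single wait on all of them. A sketch matching the logged sequence:

    nthreads=8; pids=()                           # sh@58
    for (( i = 0; i < nthreads; i++ )); do        # sh@59
        $rpc bdev_null_create "null$i" 100 4096   # sh@60: null0..null7
    done
    for (( i = 0; i < nthreads; i++ )); do        # sh@62
        add_remove $(( i + 1 )) "null$i" &        # sh@63: NSIDs 1..8
        pids+=($!)                                # sh@64
    done
    wait "${pids[@]}"                             # sh@66: wait 2256260 2256262 ... 2256279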
00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
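From here on the add/remove entries arrive out of numeric order (the removes just below go 2, 7, 4, 8, 1, 5, 6, 3) simply because the eight workers' xtrace output interleaves. To watch the resulting namespace churn from a second shell, the live list can be polled with the nvmf_get_subsystems RPC; the jq filter below is a hypothetical helper that assumes the usual output shape (an array of subsystems, each carrying a namespaces array):

    # Hypothetical observer; not part of the test itself ($rpc as before).
    $rpc nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | [.namespaces[].nsid]'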
00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2256260 2256262 2256266 2256268 2256270 2256273 2256275 2256279 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:59.597 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:59.857 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:59.857 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:59.857 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:59.857 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:59.857 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:59.857 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:59.857 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:59.857 10:50:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.117 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:00.377 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.377 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.378 10:50:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.378 10:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:00.378 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:00.378 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:00.378 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:00.636 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:00.636 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:00.636 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:00.636 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.636 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:00.896 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.896 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.896 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:00.896 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.896 10:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.896 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:00.896 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.896 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:00.897 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:01.156 10:50:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:01.156 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:01.416 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:01.416 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:01.416 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:01.416 10:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:01.675 10:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:01.675 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:01.933 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:01.933 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:01.933 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:01.934 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:40:01.934 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.192 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:02.451 10:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:02.451 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.710 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.969 10:50:47 
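
Every one of these traced commands is a standalone JSON-RPC call into the running nvmf target, so the same hot-add/hot-remove cycle can be reproduced by hand against a live target. A short usage example built from the calls visible in the trace; the nvmf_get_subsystems check in the middle is an added verification step, not something this test performs:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# hot-add bdev null2 to cnode1 as namespace 3
"$rpc" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2

# confirm the namespace is now listed on the subsystem
"$rpc" nvmf_get_subsystems

# hot-remove it again, addressing it by namespace ID
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
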
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:02.969 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.227 10:50:47 
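
The uniform prefix on every trace line — elapsed time, wall-clock time, test name, then source_file@line -- # command — is bash xtrace output with a customized PS4: bash re-expands PS4 (including parameter and command substitution) before echoing each command once set -x is active. A hedged illustration of the general mechanism, not SPDK's exact PS4 string:

# PS4 is re-expanded before every traced command, so it can stamp
# the time and the source location of the command being echoed.
export PS4='+ $(date +%T) ${BASH_SOURCE##*/}@${LINENO} -- # '
set -x        # from here on, each command appears with that prefix
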
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.227 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:03.485 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.485 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.485 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:03.485 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.485 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.485 10:50:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:03.485 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.485 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.485 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:03.743 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.001 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.002 10:50:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.002 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:04.261 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:04.519 10:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.520 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.520 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.520 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:04.520 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.520 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.520 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:04.778 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.778 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.778 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:04.778 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:04.778 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:04.779 10:50:49 
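
The null0..null7 bdevs being cycled here were created during test setup; SPDK null bdevs complete I/O immediately without any backing storage, which makes them cheap devices for hammering the namespace attach/detach path. A sketch of how such backing devices are typically created over RPC; the 100 MiB size and 512-byte block size are illustrative assumptions, not the values this run used:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# bdev_null_create <name> <total_size_mb> <block_size>
for n in {0..7}; do
    "$rpc" bdev_null_create "null$n" 100 512
done
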
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:04.779 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.038 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:05.297 
10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.297 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:05.556 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.556 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.556 10:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:05.556 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:05.814 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:06.073 10:50:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:06.639 rmmod nvme_tcp 00:40:06.639 rmmod nvme_fabrics 00:40:06.639 rmmod nvme_keyring 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2251460 ']' 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2251460 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2251460 ']' 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2251460 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:06.639 10:50:51 
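
At 00:40:06.639 the loop has run its ten passes, the EXIT trap is cleared, and nvmftestfini begins tearing the target down. The traced sequence from nvmf/common.sh and autotest_common.sh is: sync, unload the kernel NVMe-oF modules with retries (rmmod can transiently fail while host connections drain), kill target pid 2251460 after confirming the process is not sudo, then restore iptables and flush the test NIC. A condensed sketch reconstructed from those traced lines; the retry bound and the sudo guard follow the trace, the glue between them is assumed:

sync
set +e                                     # module unload may fail while connections drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics
set -e

pid=2251460                                # the nvmf target launched at test entry
name=$(ps --no-headers -o comm= "$pid")    # trace shows process_name=reactor_1
[ "$name" = sudo ] && exit 1               # never kill a stray sudo via pid reuse
kill "$pid"
wait "$pid"                                # reap it so the listeners are truly gone

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK's firewall rules
ip -4 addr flush cvl_0_1                               # clear the test interface address
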
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2251460 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2251460' 00:40:06.639 killing process with pid 2251460 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2251460 00:40:06.639 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2251460 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:07.212 10:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:09.123 00:40:09.123 real 0m55.589s 00:40:09.123 user 3m43.935s 00:40:09.123 sys 0m26.393s 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:09.123 ************************************ 00:40:09.123 END TEST nvmf_ns_hotplug_stress 00:40:09.123 ************************************ 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:09.123 ************************************ 00:40:09.123 START TEST nvmf_delete_subsystem 00:40:09.123 ************************************ 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:09.123 * Looking for test storage... 00:40:09.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:40:09.123 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.384 --rc genhtml_branch_coverage=1 00:40:09.384 --rc genhtml_function_coverage=1 00:40:09.384 --rc genhtml_legend=1 00:40:09.384 --rc geninfo_all_blocks=1 00:40:09.384 --rc geninfo_unexecuted_blocks=1 00:40:09.384 00:40:09.384 ' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.384 --rc genhtml_branch_coverage=1 00:40:09.384 --rc genhtml_function_coverage=1 00:40:09.384 --rc genhtml_legend=1 00:40:09.384 --rc geninfo_all_blocks=1 00:40:09.384 --rc geninfo_unexecuted_blocks=1 00:40:09.384 00:40:09.384 ' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.384 --rc genhtml_branch_coverage=1 00:40:09.384 --rc genhtml_function_coverage=1 00:40:09.384 --rc genhtml_legend=1 00:40:09.384 --rc geninfo_all_blocks=1 00:40:09.384 --rc geninfo_unexecuted_blocks=1 00:40:09.384 00:40:09.384 ' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:09.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.384 --rc genhtml_branch_coverage=1 00:40:09.384 --rc genhtml_function_coverage=1 00:40:09.384 --rc 
genhtml_legend=1 00:40:09.384 --rc geninfo_all_blocks=1 00:40:09.384 --rc geninfo_unexecuted_blocks=1 00:40:09.384 00:40:09.384 ' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.384 10:50:53 
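
The new test opens with the same preamble every test in this run uses: scripts/common.sh probes the installed lcov and the traced cmp_versions helper (scripts/common.sh@333-368) decides whether version 1.15 is older than 2, which selects the legacy LCOV_OPTS flags exported just above. cmp_versions is a generic dotted-version comparator: split both versions on '.', '-' and ':', then compare component-wise, padding the shorter one with zeros. A minimal re-implementation of the traced logic, omitting the decimal() sanitization the real helper performs:

lt() { cmp_versions "$1" '<' "$2"; }       # lt A B: true when A is older than B

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v
    IFS=.-: read -ra ver1 <<< "$1"         # @336: split on . - :
    IFS=.-: read -ra ver2 <<< "$3"         # @337
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '=' ]]                        # all components equal
}

lt 1.15 2 && echo "lcov predates 2.x: enable legacy branch/function coverage flags"

Here lt 1.15 2 splits into (1 15) and (2), finds 1 < 2 in the first component, and returns success, matching the @365-368 trace above.
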
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.384 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:40:09.385 10:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:12.680 10:50:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:12.680 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:12.681 10:50:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:12.681 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:12.681 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:12.681 10:50:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:12.681 Found net devices under 0000:84:00.0: cvl_0_0 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:12.681 Found net devices under 0000:84:00.1: cvl_0_1 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:12.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:12.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:40:12.681 00:40:12.681 --- 10.0.0.2 ping statistics --- 00:40:12.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:12.681 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:12.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:12.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:40:12.681 00:40:12.681 --- 10.0.0.1 ping statistics --- 00:40:12.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:12.681 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:12.681 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2259308 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2259308 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2259308 ']' 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
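Condensed from the xtrace above, the nvmf_tcp_init plumbing that lets one host act as both target and initiator boils down to the steps below (a sketch assembled from the traced commands; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are simply the values this run derived for the two E810 ports):

  # Start from clean interfaces, then give the target-side port its own
  # network namespace so target and initiator traffic crosses a real link
  # even on a single machine.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # The initiator keeps 10.0.0.1 in the root namespace; the target gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port on the initiator-facing interface; the comment
  # tags the rule so cleanup can later strip it via iptables-save | grep -v.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Verify both directions before bringing the target up.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1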
00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:12.682 10:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:12.682 [2024-12-09 10:50:57.019951] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:12.682 [2024-12-09 10:50:57.021291] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:40:12.682 [2024-12-09 10:50:57.021358] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:12.682 [2024-12-09 10:50:57.160694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:12.682 [2024-12-09 10:50:57.280141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:12.682 [2024-12-09 10:50:57.280254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:12.682 [2024-12-09 10:50:57.280290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:12.682 [2024-12-09 10:50:57.280323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:12.682 [2024-12-09 10:50:57.280348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:12.682 [2024-12-09 10:50:57.286769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.682 [2024-12-09 10:50:57.286800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.942 [2024-12-09 10:50:57.467767] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:12.942 [2024-12-09 10:50:57.467787] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:12.942 [2024-12-09 10:50:57.468396] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
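The nvmf_tgt command line echoed above is assembled piecewise by nvmf/common.sh. Reconstructed from the xtraced fragments (a sketch; only the array operations visible in this excerpt are shown, and the nvmf_tgt path is taken from the final invocation rather than from the sourced script itself):

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id 0 plus a full tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")                  # empty in this run: hugepages are in use
  NVMF_APP+=(--interrupt-mode)                 # this job exercises the interrupt-mode reactors
  # Prefix the whole thing with the namespace wrapper created by nvmf_tcp_init:
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  # nvmfappstart -m 0x3 then launches, exactly as traced above:
  #   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3

The -m 0x3 mask pins the target to cores 0 and 1 (the two reactors that just reported starting), leaving cores 2 and 3 free for the perf initiator used below.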
00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.882 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 [2024-12-09 10:50:58.540070] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 [2024-12-09 10:50:58.568680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 NULL1 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.143 10:50:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 Delay0 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2259512 00:40:14.143 10:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:40:14.143 [2024-12-09 10:50:58.689535] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
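With the target listening, the subsystem under test is provisioned entirely over RPC. Stripped of the xtrace noise, the sequence traced above is equivalent to the following (a sketch; rendering rpc_cmd as a direct scripts/rpc.py call against the default /var/tmp/spdk.sock socket is an assumption of this rewrite, not something shown in the excerpt):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # A 1000 MiB null bdev wrapped in a delay bdev with one-second average
  # latencies (the -r/-t/-w/-n values are microseconds) guarantees that I/O
  # is still queued when the subsystem is deleted out from under the initiator.
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Background load: 128 outstanding 512-byte random 70/30 read/write ops on cores 2-3.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!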
00:40:16.167 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:40:16.167 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:16.167 10:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... several hundred repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers from the in-flight spdk_nvme_perf workload, interleaved around the qpair state changes below, condensed ...]
00:40:16.167 [2024-12-09 10:51:00.789384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2d680 is same with the state(6) to be set
00:40:16.168 [2024-12-09 10:51:00.790123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ebc000c40 is same with the state(6) to be set
00:40:17.106 [2024-12-09 10:51:01.752739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2e9b0 is same with the state(6) to be set
00:40:17.366 [2024-12-09 10:51:01.787117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2d2c0 is same with the state(6) to be set
00:40:17.367 [2024-12-09 10:51:01.792264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ebc00d020 is same with the state(6) to be set
00:40:17.367 [2024-12-09 10:51:01.792402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ebc00d680 is same with the state(6) to be set
00:40:17.367 [2024-12-09 10:51:01.793385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2d860 is same with the state(6) to be set
00:40:17.367 Initializing NVMe Controllers
00:40:17.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:17.367 Controller IO queue size 128, less than required.
00:40:17.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:40:17.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:40:17.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:40:17.367 Initialization complete. Launching workers.
00:40:17.367 ========================================================
00:40:17.367                                                                                Latency(us)
00:40:17.367 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:40:17.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     158.19       0.08  924452.58     590.73 1013868.50
00:40:17.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     159.19       0.08  919151.80     407.50 1013790.99
00:40:17.367 ========================================================
00:40:17.367 Total                                                                  :     317.38       0.15  921793.91     407.50 1013868.50
00:40:17.367
00:40:17.367 [2024-12-09 10:51:01.794429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e9b0 (9): Bad file descriptor
00:40:17.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:40:17.367 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:17.367 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:40:17.367 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2259512
00:40:17.367 10:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2259512
00:40:17.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2259512) - No such process
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2259512
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2259512
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2259512
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:17.940 [2024-12-09 10:51:02.328574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2260016 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:17.940 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:17.940 [2024-12-09 10:51:02.439200] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
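The half-second cadence that follows is the script waiting for spdk_nvme_perf to go away. In the first run the deletion had to kill perf mid-I/O, so its exit status was inverted with the NOT helper (the valid_exec_arg/es=1 machinery traced earlier); in this second run the 3-second workload is allowed to finish and is reaped with a plain wait. Reconstructed from the xtraced lines, the waiting pattern looks roughly like this (a sketch; the real script's lines are the @56-@60 references in the trace, and the timeout action is assumed):

  delay=0
  while kill -0 "$perf_pid"; do    # prints 'No such process' once perf exits
      sleep 0.5
      if (( delay++ > 20 )); then  # the first run allows 30 polls, this one 20
          exit 1                   # perf outlived its deadline: fail the test
      fi
  done
  NOT wait "$perf_pid"             # first run only: succeed iff perf exited nonzero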
00:40:18.201 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:18.201 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:18.201 10:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:18.775 10:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:18.775 10:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:18.775 10:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:19.346 10:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:19.346 10:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:19.346 10:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:19.915 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:19.915 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:19.915 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:20.486 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:20.486 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:20.486 10:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:20.746 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:20.746 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:20.746 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:21.005 Initializing NVMe Controllers 00:40:21.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:21.005 Controller IO queue size 128, less than required. 00:40:21.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:21.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:21.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:21.005 Initialization complete. Launching workers. 
00:40:21.005 ======================================================== 00:40:21.005 Latency(us) 00:40:21.005 Device Information : IOPS MiB/s Average min max 00:40:21.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003939.71 1000207.51 1041696.91 00:40:21.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006094.44 1000217.93 1013099.73 00:40:21.005 ======================================================== 00:40:21.005 Total : 256.00 0.12 1005017.08 1000207.51 1041696.91 00:40:21.005 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2260016 00:40:21.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2260016) - No such process 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2260016 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:21.265 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:21.265 rmmod nvme_tcp 00:40:21.265 rmmod nvme_fabrics 00:40:21.265 rmmod nvme_keyring 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2259308 ']' 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2259308 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2259308 ']' 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2259308 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:21.525 10:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2259308 00:40:21.525 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:21.525 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:21.525 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2259308' 00:40:21.525 killing process with pid 2259308 00:40:21.525 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2259308 00:40:21.525 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2259308 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:21.785 10:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:24.333 00:40:24.333 real 0m14.767s 00:40:24.333 user 0m25.874s 00:40:24.333 sys 0m4.820s 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:24.333 ************************************ 00:40:24.333 END TEST nvmf_delete_subsystem 00:40:24.333 ************************************ 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:24.333 ************************************ 00:40:24.333 START TEST nvmf_host_management 00:40:24.333 ************************************ 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:24.333 * Looking for test storage... 00:40:24.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:24.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.333 --rc genhtml_branch_coverage=1 00:40:24.333 --rc genhtml_function_coverage=1 00:40:24.333 --rc genhtml_legend=1 00:40:24.333 --rc geninfo_all_blocks=1 00:40:24.333 --rc geninfo_unexecuted_blocks=1 00:40:24.333 00:40:24.333 ' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:24.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.333 --rc genhtml_branch_coverage=1 00:40:24.333 --rc genhtml_function_coverage=1 00:40:24.333 --rc genhtml_legend=1 00:40:24.333 --rc geninfo_all_blocks=1 00:40:24.333 --rc geninfo_unexecuted_blocks=1 00:40:24.333 00:40:24.333 ' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:24.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.333 --rc genhtml_branch_coverage=1 00:40:24.333 --rc genhtml_function_coverage=1 00:40:24.333 --rc genhtml_legend=1 00:40:24.333 --rc geninfo_all_blocks=1 00:40:24.333 --rc geninfo_unexecuted_blocks=1 00:40:24.333 00:40:24.333 ' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:24.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:24.333 --rc genhtml_branch_coverage=1 00:40:24.333 --rc genhtml_function_coverage=1 00:40:24.333 --rc genhtml_legend=1 
00:40:24.333 --rc geninfo_all_blocks=1 00:40:24.333 --rc geninfo_unexecuted_blocks=1 00:40:24.333 00:40:24.333 ' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:24.333 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:24.334 10:51:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:24.334 10:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:27.634 10:51:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:27.634 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:27.634 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:27.634 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:27.635 Found net devices under 0000:84:00.0: cvl_0_0 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:27.635 Found net devices under 0000:84:00.1: cvl_0_1 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:27.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:27.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:40:27.635 00:40:27.635 --- 10.0.0.2 ping statistics --- 00:40:27.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.635 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:27.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:27.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:40:27.635 00:40:27.635 --- 10.0.0.1 ping statistics --- 00:40:27.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.635 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2263011 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2263011 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2263011 ']' 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:27.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:27.635 10:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:27.635 [2024-12-09 10:51:12.043493] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:27.635 [2024-12-09 10:51:12.044830] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:40:27.635 [2024-12-09 10:51:12.044898] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:27.635 [2024-12-09 10:51:12.190827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:27.895 [2024-12-09 10:51:12.312816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:27.895 [2024-12-09 10:51:12.312921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:27.895 [2024-12-09 10:51:12.312962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:27.895 [2024-12-09 10:51:12.313004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:27.895 [2024-12-09 10:51:12.313031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:27.895 [2024-12-09 10:51:12.316473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:27.895 [2024-12-09 10:51:12.316573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:27.896 [2024-12-09 10:51:12.316623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:27.896 [2024-12-09 10:51:12.316627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:27.896 [2024-12-09 10:51:12.489476] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:27.896 [2024-12-09 10:51:12.489674] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:27.896 [2024-12-09 10:51:12.490018] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:27.896 [2024-12-09 10:51:12.491043] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:27.896 [2024-12-09 10:51:12.491635] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
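Condensed, the e810 discovery and namespace plumbing traced above, plus the target launch whose startup notices appear here, amount to the following sketch (interface names are the cvl_* devices found above; the iptables comment string and waitforlisten internals are elided):

    # One E810 port becomes the target inside a netns (10.0.0.2);
    # its sibling stays in the root namespace as the initiator (10.0.0.1).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # netns -> initiator port

    # nvmf_tgt runs inside the namespace in interrupt mode on cores 1-4 (-m 0x1E),
    # with all trace groups enabled (-e 0xFFFF); waitforlisten blocks until the
    # RPC server answers on /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    waitforlisten $nvmfpid

As the trace resumes below, waitforlisten returns and the TCP transport is created with nvmf_create_transport -t tcp -o -u 8192.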
00:40:27.896 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:27.896 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:27.896 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:27.896 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:27.896 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.155 [2024-12-09 10:51:12.573554] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.155 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.155 Malloc0 00:40:28.155 [2024-12-09 10:51:12.669776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2263174 00:40:28.156 10:51:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2263174 /var/tmp/bdevperf.sock 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2263174 ']' 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:28.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:28.156 { 00:40:28.156 "params": { 00:40:28.156 "name": "Nvme$subsystem", 00:40:28.156 "trtype": "$TEST_TRANSPORT", 00:40:28.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:28.156 "adrfam": "ipv4", 00:40:28.156 "trsvcid": "$NVMF_PORT", 00:40:28.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:28.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:28.156 "hdgst": ${hdgst:-false}, 00:40:28.156 "ddgst": ${ddgst:-false} 00:40:28.156 }, 00:40:28.156 "method": "bdev_nvme_attach_controller" 00:40:28.156 } 00:40:28.156 EOF 00:40:28.156 )") 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
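gen_nvmf_target_json, whose expansion is printed immediately below, emits one bdev_nvme_attach_controller stanza per requested subsystem and pipes the result through jq. As a standalone sketch (the outer "subsystems"/"bdev" envelope is an assumption about nvmf/common.sh, not shown verbatim in this trace), the config bdevperf receives on /dev/fd/63 looks like:

    # Hedged reconstruction of the generated bdevperf config and launch.
    config='{"subsystems": [{"subsystem": "bdev", "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]}]}'

    # 10 s verify workload, 64 KiB I/O at queue depth 64, on a private RPC socket
    # so the harness can poll it independently of the target's /var/tmp/spdk.sock.
    bdevperf -r /var/tmp/bdevperf.sock --json <(echo "$config") \
        -q 64 -o 65536 -w verify -t 10 &

Once bdevperf is up, the harness polls rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1, extracting num_read_ops with jq until at least 100 reads have completed, as the loop traced below shows.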
00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:28.156 10:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:28.156 "params": { 00:40:28.156 "name": "Nvme0", 00:40:28.156 "trtype": "tcp", 00:40:28.156 "traddr": "10.0.0.2", 00:40:28.156 "adrfam": "ipv4", 00:40:28.156 "trsvcid": "4420", 00:40:28.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:28.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:28.156 "hdgst": false, 00:40:28.156 "ddgst": false 00:40:28.156 }, 00:40:28.156 "method": "bdev_nvme_attach_controller" 00:40:28.156 }' 00:40:28.156 [2024-12-09 10:51:12.757580] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:40:28.156 [2024-12-09 10:51:12.757670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263174 ] 00:40:28.416 [2024-12-09 10:51:12.835350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.416 [2024-12-09 10:51:12.900189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.675 Running I/O for 10 seconds... 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:28.675 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.934 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:40:28.934 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:40:28.934 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:40:29.195 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:40:29.195 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:29.195 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:29.195 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.195 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:29.195 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.196 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:29.196 [2024-12-09 10:51:13.677908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.677975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678004] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.196 [2024-12-09 10:51:13.678895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.196 [2024-12-09 10:51:13.678911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.678925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.678941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.678955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.678971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.678984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:29.197 [2024-12-09 10:51:13.679841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:29.197 [2024-12-09 10:51:13.679856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:29.197 [2024-12-09 10:51:13.679870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:29.197 [2024-12-09 10:51:13.679886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:29.197 [2024-12-09 10:51:13.679900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:29.197 [2024-12-09 10:51:13.679916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:29.197 [2024-12-09 10:51:13.679930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:29.197 [2024-12-09 10:51:13.680091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:40:29.197 [2024-12-09 10:51:13.680113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:29.197 [2024-12-09 10:51:13.680128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:40:29.197 [2024-12-09 10:51:13.680141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:29.197 [2024-12-09 10:51:13.680155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:40:29.197 [2024-12-09 10:51:13.680169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:29.197 [2024-12-09 10:51:13.680183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:40:29.197 [2024-12-09 10:51:13.680207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:29.197 [2024-12-09 10:51:13.680219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7cc60 is same with the state(6) to be set
00:40:29.197 [2024-12-09 10:51:13.681386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
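Context for the abort storm above: host_management.sh@84 earlier in this trace removed host0's access to cnode0 while bdevperf was mid-run, so the target tore down that host's submission queues and every in-flight READ/WRITE completed as ABORTED - SQ DELETION; host_management.sh@85 just below re-adds the host so the initiator's reset can reconnect. A minimal standalone sketch of that fault-injection pair, assuming a running SPDK nvmf target on the default RPC socket and the same NQNs as this run (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode0
    hostnqn=nqn.2016-06.io.spdk:host0

    # Revoke the host's access: the target deletes its queue pairs, so
    # queued commands complete as ABORTED - SQ DELETION (00/08).
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

    # Restore access so the host-side reset path can reconnect
    # ("Resetting controller successful." further down).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn"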
00:40:29.197 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:29.197 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:40:29.197 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:29.197 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:40:29.198 task offset: 87040 on job bdev=Nvme0n1 fails
00:40:29.198 
00:40:29.198 Latency(us)
00:40:29.198 [2024-12-09T09:51:13.852Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s    Average       min       max
00:40:29.198 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:29.198 Job: Nvme0n1 ended in about 0.43 seconds with error
00:40:29.198 Verification LBA range: start 0x0 length 0x400
00:40:29.198 Nvme0n1            :       0.43  1489.38   93.09  148.94    0.00   38020.83   2949.12  33981.63
00:40:29.198 [2024-12-09T09:51:13.852Z] ===================================================================================================================
00:40:29.198 [2024-12-09T09:51:13.852Z] Total              :             1489.38   93.09  148.94    0.00   38020.83   2949.12  33981.63
00:40:29.198 [2024-12-09 10:51:13.684288] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:40:29.198 [2024-12-09 10:51:13.684323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7cc60 (9): Bad file descriptor
00:40:29.198 [2024-12-09 10:51:13.688485] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:40:29.198 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:29.198 10:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2263174
00:40:30.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2263174) - No such process
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:40:30.134 {
00:40:30.134 "params": {
00:40:30.134 "name": "Nvme$subsystem",
00:40:30.134 "trtype": "$TEST_TRANSPORT",
00:40:30.134 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:30.134 "adrfam": "ipv4",
00:40:30.134 "trsvcid": "$NVMF_PORT",
00:40:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:30.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:30.134 "hdgst": ${hdgst:-false},
00:40:30.134 "ddgst": ${ddgst:-false}
00:40:30.134 },
00:40:30.134 "method": "bdev_nvme_attach_controller"
00:40:30.134 }
00:40:30.134 EOF
00:40:30.134 )")
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
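The @560-@586 trace just above is nvmf/common.sh's gen_nvmf_target_json assembling bdevperf's JSON config on the fly: one bdev_nvme_attach_controller entry per requested subsystem (built from a here-doc template), joined with IFS=',' and fed through jq; the pretty-printed result appears in the next entries. A hedged, condensed reconstruction of that flow, with this run's address, port and digest settings hard-coded where the real helper reads them from the environment, and printf standing in for the here-doc:

    # Sketch of the gen_nvmf_target_json flow traced above (assumptions noted).
    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' "$subsystem" "$subsystem" "$subsystem")")
        done
        # IFS=, joins the per-subsystem entries (the @585 IFS=, / @586
        # printf pair in the trace); jq . validates and pretty-prints.
        local IFS=,
        printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}' "${config[*]}" | jq .
    }

    # bdevperf then reads the config over a pipe fd, as in the logged
    # invocation; the /dev/fd/62 number is an artifact of the harness's
    # process substitution:
    #   ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1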
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:40:30.134 10:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:40:30.134 "params": {
00:40:30.134 "name": "Nvme0",
00:40:30.134 "trtype": "tcp",
00:40:30.134 "traddr": "10.0.0.2",
00:40:30.134 "adrfam": "ipv4",
00:40:30.134 "trsvcid": "4420",
00:40:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:30.134 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:30.134 "hdgst": false,
00:40:30.134 "ddgst": false
00:40:30.134 },
00:40:30.134 "method": "bdev_nvme_attach_controller"
00:40:30.134 }'
00:40:30.134 [2024-12-09 10:51:14.740734] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization...
00:40:30.134 [2024-12-09 10:51:14.740828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263336 ]
00:40:30.392 [2024-12-09 10:51:14.813210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:30.392 [2024-12-09 10:51:14.871648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:30.649 Running I/O for 1 seconds...
00:40:31.585 1600.00 IOPS, 100.00 MiB/s
00:40:31.585 
00:40:31.585 Latency(us)
00:40:31.585 [2024-12-09T09:51:16.239Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s    Average       min       max
00:40:31.585 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:31.585 Verification LBA range: start 0x0 length 0x400
00:40:31.585 Nvme0n1            :       1.03  1613.33  100.83    0.00    0.00   39041.68   5631.24  34175.81
00:40:31.585 [2024-12-09T09:51:16.239Z] ===================================================================================================================
00:40:31.585 [2024-12-09T09:51:16.239Z] Total              :             1613.33  100.83    0.00    0.00   39041.68   5631.24  34175.81
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:31.843 rmmod nvme_tcp
00:40:31.843 rmmod nvme_fabrics
00:40:31.843 rmmod nvme_keyring
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2263011 ']'
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2263011
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2263011 ']'
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2263011
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:31.843 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263011
00:40:32.101 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:40:32.101 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:40:32.102 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263011'
00:40:32.102 killing process with pid 2263011
00:40:32.102 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2263011
00:40:32.102 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2263011
00:40:32.359 [2024-12-09 10:51:16.850256] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:40:32.359 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:40:32.360 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:40:32.360 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:40:32.360 10:51:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:40:34.901 10:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:40:34.901 10:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:40:34.901 
00:40:34.901 real 0m10.451s
00:40:34.901 user 0m19.005s
00:40:34.901 sys 0m4.811s
00:40:34.901 10:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:34.901 10:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:40:34.901 ************************************
00:40:34.901 END TEST nvmf_host_management
00:40:34.901 ************************************
00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:40:34.901 ************************************
00:40:34.901 START TEST nvmf_lvol
00:40:34.901 ************************************
00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:40:34.901 * Looking for test storage...
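Just below, after the test-storage banner, nvmf_lvol.sh pulls in the shared helpers, and the first long xtrace block (common/autotest_common.sh@1710-@1725 with scripts/common.sh@333-@368) is the lcov version gate: lt 1.15 2 resolves through cmp_versions, which splits both version strings on '.-:' and compares numeric components left to right. A condensed sketch of that comparison as reconstructed from the trace (the real helper also sanitizes components through decimal() and routes ==, >=, <= through the same case machinery):

    # Split on .-: and compare numeric components left to right;
    # missing components count as 0.
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local v ver1_l ver2_l
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $2 == "<" ]] && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $2 == ">" ]] && return 0
            ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1
        done
        [[ $2 == *"="* ]]  # equal versions satisfy only <=, >=, ==
    }
    # lt 1.15 2 -> 1 < 2 on the first component -> true, so the trace
    # below picks the pre-2.0 lcov option names (@1712 lcov_rc_opt=...).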
00:40:34.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:34.901 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:34.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.902 --rc genhtml_branch_coverage=1 00:40:34.902 --rc genhtml_function_coverage=1 00:40:34.902 --rc genhtml_legend=1 00:40:34.902 --rc geninfo_all_blocks=1 00:40:34.902 --rc geninfo_unexecuted_blocks=1 00:40:34.902 00:40:34.902 ' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:34.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.902 --rc genhtml_branch_coverage=1 00:40:34.902 --rc genhtml_function_coverage=1 00:40:34.902 --rc genhtml_legend=1 00:40:34.902 --rc geninfo_all_blocks=1 00:40:34.902 --rc geninfo_unexecuted_blocks=1 00:40:34.902 00:40:34.902 ' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:34.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.902 --rc genhtml_branch_coverage=1 00:40:34.902 --rc genhtml_function_coverage=1 00:40:34.902 --rc genhtml_legend=1 00:40:34.902 --rc geninfo_all_blocks=1 00:40:34.902 --rc geninfo_unexecuted_blocks=1 00:40:34.902 00:40:34.902 ' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:34.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.902 --rc genhtml_branch_coverage=1 00:40:34.902 --rc genhtml_function_coverage=1 00:40:34.902 --rc genhtml_legend=1 00:40:34.902 --rc geninfo_all_blocks=1 00:40:34.902 --rc geninfo_unexecuted_blocks=1 00:40:34.902 00:40:34.902 ' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.902 10:51:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.902 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:34.903 10:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:38.196 10:51:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:38.196 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:38.196 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:38.196 Found net devices under 0000:84:00.0: cvl_0_0 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:38.196 Found net devices under 0000:84:00.1: cvl_0_1 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:38.196 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:38.197 
10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:38.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:38.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:40:38.197 00:40:38.197 --- 10.0.0.2 ping statistics --- 00:40:38.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.197 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:38.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:38.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:40:38.197 00:40:38.197 --- 10.0.0.1 ping statistics --- 00:40:38.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.197 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2265673 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2265673 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2265673 ']' 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:38.197 [2024-12-09 10:51:22.514686] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
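The nvmf_tcp_init block traced above (nvmf/common.sh@250-@291) wires the two E810 ports into a loopback test topology: cvl_0_1 stays in the root namespace as the initiator side, while cvl_0_0 is moved into a dedicated network namespace and becomes the target side, so NVMe/TCP traffic really crosses the link. A condensed sketch of those steps, with every name and address taken from the trace (the ipts line in the log is the harness's iptables wrapper, which tags each rule with an SPDK_NVMF comment so it can be flushed during teardown):

# Sketch of the topology built by nvmf_tcp_init; names/addresses from the trace above.
TARGET_IF=cvl_0_0        # becomes NVMF_TARGET_INTERFACE, 10.0.0.2
INITIATOR_IF=cvl_0_1     # becomes NVMF_INITIATOR_INTERFACE, 10.0.0.1
NS=cvl_0_0_ns_spdk       # NVMF_TARGET_NAMESPACE
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                   # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # root ns -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target ns -> initiator port

Both pings answering, as they do above, is what lets nvmftestinit reach return 0 and go on to modprobe nvme-tcp.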
00:40:38.197 [2024-12-09 10:51:22.516253] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:40:38.197 [2024-12-09 10:51:22.516336] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:38.197 [2024-12-09 10:51:22.616199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:38.197 [2024-12-09 10:51:22.690049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:38.197 [2024-12-09 10:51:22.690141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:38.197 [2024-12-09 10:51:22.690161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:38.197 [2024-12-09 10:51:22.690178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:38.197 [2024-12-09 10:51:22.690207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:38.197 [2024-12-09 10:51:22.692114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:38.197 [2024-12-09 10:51:22.692175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:38.197 [2024-12-09 10:51:22.692180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.197 [2024-12-09 10:51:22.808587] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:38.197 [2024-12-09 10:51:22.808860] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:38.197 [2024-12-09 10:51:22.808887] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:38.197 [2024-12-09 10:51:22.809159] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
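At this point the target is up: pid 2265673 inside the namespace, three reactors (mask 0x7) all switched to interrupt mode. Everything nvmf_lvol.sh does next goes through scripts/rpc.py against /var/tmp/spdk.sock, and the trace that follows exercises roughly this sequence. A condensed sketch using the RPC names and sizes from the log ($rpc stands for the workspace's scripts/rpc.py; the UUID variables are filled from each RPC's stdout, where the log prints the concrete values):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8 KiB I/O unit
$rpc bdev_malloc_create 64 512                                   # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0 over the two ramdisks
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore on the raid bdev; prints its UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol (LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf runs randwrite (qd 128) against the subsystem in the background:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                                 # grow to LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                  # detach the clone from its snapshot

The point of the test is that the snapshot/resize/clone/inflate calls land while the perf job (pid 2266359 below) is writing, so lvol metadata churn is exercised under live NVMe/TCP I/O with the target in interrupt mode; the run ends with wait 2266359 and the latency table that follows.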
00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:38.197 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:38.475 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:38.475 10:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:38.734 [2024-12-09 10:51:23.181027] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:38.734 10:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:39.302 10:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:39.302 10:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:40.240 10:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:40.240 10:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:40.808 10:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:41.379 10:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f197deab-8bb8-4def-85c4-4f2e94ccaef1 00:40:41.379 10:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f197deab-8bb8-4def-85c4-4f2e94ccaef1 lvol 20 00:40:42.321 10:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d63931e0-4537-403c-8fc7-cb9c794c2a19 00:40:42.321 10:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:42.890 10:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d63931e0-4537-403c-8fc7-cb9c794c2a19 00:40:43.461 10:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:43.720 [2024-12-09 10:51:28.269327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:40:43.720 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:44.289 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2266359 00:40:44.289 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:44.289 10:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:45.228 10:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d63931e0-4537-403c-8fc7-cb9c794c2a19 MY_SNAPSHOT 00:40:45.795 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=aa8e1761-74f3-44e8-812f-45d5dbb7586e 00:40:45.795 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d63931e0-4537-403c-8fc7-cb9c794c2a19 30 00:40:46.362 10:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone aa8e1761-74f3-44e8-812f-45d5dbb7586e MY_CLONE 00:40:46.620 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b7a5225f-bd84-44a1-a738-769309b7e749 00:40:46.620 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b7a5225f-bd84-44a1-a738-769309b7e749 00:40:47.558 10:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2266359 00:40:55.677 Initializing NVMe Controllers 00:40:55.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:55.677 Controller IO queue size 128, less than required. 00:40:55.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:55.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:55.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:55.677 Initialization complete. Launching workers. 
00:40:55.677 ========================================================
00:40:55.677 Latency(us)
00:40:55.677 Device Information : IOPS MiB/s Average min max
00:40:55.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10506.50 41.04 12185.84 6971.91 70353.02
00:40:55.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10403.10 40.64 12307.30 4784.21 84179.98
00:40:55.677 ========================================================
00:40:55.677 Total : 20909.60 81.68 12246.27 4784.21 84179.98
00:40:55.677
00:40:55.677 10:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:40:55.677 10:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d63931e0-4537-403c-8fc7-cb9c794c2a19
00:40:55.677 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f197deab-8bb8-4def-85c4-4f2e94ccaef1
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:56.246 rmmod nvme_tcp
00:40:56.246 rmmod nvme_fabrics
00:40:56.246 rmmod nvme_keyring
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2265673 ']'
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2265673
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2265673 ']'
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2265673
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265673 00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265673' 00:40:56.246 killing process with pid 2265673 00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2265673 00:40:56.246 10:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2265673 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:56.816 10:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:58.730 00:40:58.730 real 0m24.217s 00:40:58.730 user 1m4.181s 00:40:58.730 sys 0m10.029s 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:58.730 ************************************ 00:40:58.730 END TEST nvmf_lvol 00:40:58.730 ************************************ 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:58.730 ************************************ 00:40:58.730 START TEST nvmf_lvs_grow 00:40:58.730 
************************************ 00:40:58.730 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:58.992 * Looking for test storage... 00:40:58.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:58.992 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.993 --rc genhtml_branch_coverage=1 00:40:58.993 --rc genhtml_function_coverage=1 00:40:58.993 --rc genhtml_legend=1 00:40:58.993 --rc geninfo_all_blocks=1 00:40:58.993 --rc geninfo_unexecuted_blocks=1 00:40:58.993 00:40:58.993 ' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.993 --rc genhtml_branch_coverage=1 00:40:58.993 --rc genhtml_function_coverage=1 00:40:58.993 --rc genhtml_legend=1 00:40:58.993 --rc geninfo_all_blocks=1 00:40:58.993 --rc geninfo_unexecuted_blocks=1 00:40:58.993 00:40:58.993 ' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.993 --rc genhtml_branch_coverage=1 00:40:58.993 --rc genhtml_function_coverage=1 00:40:58.993 --rc genhtml_legend=1 00:40:58.993 --rc geninfo_all_blocks=1 00:40:58.993 --rc geninfo_unexecuted_blocks=1 00:40:58.993 00:40:58.993 ' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.993 --rc genhtml_branch_coverage=1 00:40:58.993 --rc genhtml_function_coverage=1 00:40:58.993 --rc genhtml_legend=1 00:40:58.993 --rc geninfo_all_blocks=1 00:40:58.993 --rc geninfo_unexecuted_blocks=1 00:40:58.993 00:40:58.993 ' 00:40:58.993 10:51:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
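The nvmf/common.sh@25-@39 lines above are build_nvmf_app_args composing the target's argv once when the test sources common.sh; note the '[' 1 -eq 1 ']' branch that appends --interrupt-mode for this suite. A minimal sketch of that logic (the guard variable name is an assumption; the trace only shows the already-evaluated tests):

NVMF_APP=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)       # shared-memory id + tracepoint group mask
NVMF_APP+=("${NO_HUGE[@]}")                      # empty unless running in no-hugepages mode
if [ "${interrupt_mode:-0}" -eq 1 ]; then        # assumed guard name; true for this suite
    NVMF_APP+=(--interrupt-mode)
fi
# later, after nvmf_tcp_init (nvmf/common.sh@293): prefix the namespace wrapper
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # ip netns exec cvl_0_0_ns_spdk ...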
00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:58.993 10:51:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:02.291 10:51:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
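gather_supported_nvmf_pci_devs, traced at nvmf/common.sh@313-@366 above, whitelists ports by PCI vendor:device ID, keeps the family selected by SPDK_TEST_NVMF_NICS=e810, then loops over the surviving addresses; the 'Found ...' lines below are that loop's output. A trimmed sketch, assuming pci_bus_cache is the harness's earlier lspci scan keyed by vendor:device (only the IDs visible in this trace are shown):

intel=0x8086; mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})        # the two E810 device IDs checked above
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
pci_devs=("${e810[@]}")                          # SPDK_TEST_NVMF_NICS=e810 picks this family
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # netdev(s) the kernel bound to the port
    pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done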
00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:41:02.291 Found 0000:84:00.0 (0x8086 - 0x159b) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:41:02.291 Found 0000:84:00.1 (0x8086 - 0x159b) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:41:02.291 Found net devices under 0000:84:00.0: cvl_0_0 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:41:02.291 Found net devices under 0000:84:00.1: cvl_0_1 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:02.291 10:51:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:02.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:02.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:41:02.291 00:41:02.291 --- 10.0.0.2 ping statistics --- 00:41:02.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.291 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:02.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:02.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:41:02.291 00:41:02.291 --- 10.0.0.1 ping statistics --- 00:41:02.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.291 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:02.291 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2269753 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2269753 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2269753 ']' 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:02.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:02.292 10:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:02.292 [2024-12-09 10:51:46.723993] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
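What nvmf_tcp_init set up above, in plain terms: the two ports of the NIC were detected as cvl_0_0 and cvl_0_1, the target port was moved into a private network namespace so initiator and target traffic crosses a real TCP/IP path even though both ends live on the same host, the firewall was opened for NVMe/TCP port 4420, and both directions were ping-checked. A condensed sketch of the equivalent manual setup (interface, namespace, and address values taken from the log; assumes root):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check

From here on every target-side command, including nvmf_tgt itself, is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is why the target listens on 10.0.0.2 while bdevperf connects from 10.0.0.1.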
00:41:02.292 [2024-12-09 10:51:46.725308] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:41:02.292 [2024-12-09 10:51:46.725372] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:02.292 [2024-12-09 10:51:46.863843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.551 [2024-12-09 10:51:46.981912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:02.551 [2024-12-09 10:51:46.982031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:02.551 [2024-12-09 10:51:46.982067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:02.551 [2024-12-09 10:51:46.982104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:02.551 [2024-12-09 10:51:46.982117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:02.551 [2024-12-09 10:51:46.982999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.551 [2024-12-09 10:51:47.160341] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:02.551 [2024-12-09 10:51:47.160804] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:02.812 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:02.812 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:41:02.812 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:02.812 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:02.812 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:02.812 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:02.812 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:03.382 [2024-12-09 10:51:47.904140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:03.382 ************************************ 00:41:03.382 START TEST lvs_grow_clean 00:41:03.382 ************************************ 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:03.382 10:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:03.382 10:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:03.382 10:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:04.323 10:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:04.323 10:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:04.892 10:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:04.892 10:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:04.893 10:51:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:05.461 10:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:05.461 10:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:05.461 10:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d lvol 150 00:41:05.719 10:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d5c0077c-1691-4efc-85ab-59c40ad009ee 00:41:05.719 10:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.719 10:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:06.658 [2024-12-09 10:51:51.031842] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:06.658 [2024-12-09 10:51:51.032042] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:06.658 true 00:41:06.658 10:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:06.658 10:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:06.916 10:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:06.916 10:51:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:07.482 10:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5c0077c-1691-4efc-85ab-59c40ad009ee 00:41:08.420 10:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:08.989 [2024-12-09 10:51:53.440520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:08.989 10:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2270594 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2270594 /var/tmp/bdevperf.sock 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2270594 ']' 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:09.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:09.558 10:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:09.817 [2024-12-09 10:51:54.281712] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:41:09.817 [2024-12-09 10:51:54.281918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270594 ] 00:41:09.817 [2024-12-09 10:51:54.452389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.076 [2024-12-09 10:51:54.567035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:11.462 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:11.462 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:41:11.462 10:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:12.032 Nvme0n1 00:41:12.032 10:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:13.020 [ 00:41:13.020 { 00:41:13.020 "name": "Nvme0n1", 00:41:13.020 "aliases": [ 00:41:13.020 "d5c0077c-1691-4efc-85ab-59c40ad009ee" 00:41:13.020 ], 00:41:13.020 "product_name": "NVMe disk", 00:41:13.020 "block_size": 4096, 00:41:13.020 "num_blocks": 38912, 00:41:13.020 "uuid": "d5c0077c-1691-4efc-85ab-59c40ad009ee", 00:41:13.020 "numa_id": 1, 00:41:13.020 "assigned_rate_limits": { 00:41:13.020 "rw_ios_per_sec": 0, 00:41:13.020 "rw_mbytes_per_sec": 0, 00:41:13.020 "r_mbytes_per_sec": 0, 00:41:13.020 "w_mbytes_per_sec": 0 00:41:13.020 }, 00:41:13.020 "claimed": false, 00:41:13.020 "zoned": false, 00:41:13.020 "supported_io_types": { 00:41:13.020 "read": true, 00:41:13.020 "write": true, 00:41:13.020 "unmap": true, 00:41:13.020 "flush": true, 00:41:13.020 "reset": true, 00:41:13.020 "nvme_admin": true, 00:41:13.020 "nvme_io": true, 00:41:13.020 "nvme_io_md": false, 00:41:13.020 "write_zeroes": true, 00:41:13.020 "zcopy": false, 00:41:13.020 "get_zone_info": false, 00:41:13.020 "zone_management": false, 00:41:13.020 "zone_append": false, 00:41:13.020 "compare": true, 00:41:13.020 "compare_and_write": true, 00:41:13.020 "abort": true, 00:41:13.020 "seek_hole": false, 00:41:13.020 "seek_data": false, 00:41:13.020 "copy": true, 
00:41:13.020 "nvme_iov_md": false 00:41:13.020 }, 00:41:13.020 "memory_domains": [ 00:41:13.020 { 00:41:13.020 "dma_device_id": "system", 00:41:13.020 "dma_device_type": 1 00:41:13.020 } 00:41:13.020 ], 00:41:13.020 "driver_specific": { 00:41:13.020 "nvme": [ 00:41:13.020 { 00:41:13.020 "trid": { 00:41:13.020 "trtype": "TCP", 00:41:13.020 "adrfam": "IPv4", 00:41:13.020 "traddr": "10.0.0.2", 00:41:13.020 "trsvcid": "4420", 00:41:13.020 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:13.020 }, 00:41:13.020 "ctrlr_data": { 00:41:13.020 "cntlid": 1, 00:41:13.020 "vendor_id": "0x8086", 00:41:13.020 "model_number": "SPDK bdev Controller", 00:41:13.020 "serial_number": "SPDK0", 00:41:13.021 "firmware_revision": "25.01", 00:41:13.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:13.021 "oacs": { 00:41:13.021 "security": 0, 00:41:13.021 "format": 0, 00:41:13.021 "firmware": 0, 00:41:13.021 "ns_manage": 0 00:41:13.021 }, 00:41:13.021 "multi_ctrlr": true, 00:41:13.021 "ana_reporting": false 00:41:13.021 }, 00:41:13.021 "vs": { 00:41:13.021 "nvme_version": "1.3" 00:41:13.021 }, 00:41:13.021 "ns_data": { 00:41:13.021 "id": 1, 00:41:13.021 "can_share": true 00:41:13.021 } 00:41:13.021 } 00:41:13.021 ], 00:41:13.021 "mp_policy": "active_passive" 00:41:13.021 } 00:41:13.021 } 00:41:13.021 ] 00:41:13.021 10:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2270975 00:41:13.021 10:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:13.021 10:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:13.021 Running I/O for 10 seconds... 
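Before the first timed run, lvs_grow has built the stack the grow test exercises: a 200M file exported as AIO bdev aio_bdev with 4096-byte blocks, an lvstore named lvs on top with 4 MiB clusters (49 data clusters to start), and a 150M lvol that becomes namespace 1 of nqn.2016-06.io.spdk:cnode0; the backing file is then truncated to 400M and rescanned (51200 -> 102400 blocks), leaving headroom the lvstore has not yet claimed. A condensed replay of that RPC sequence (rpc.py abbreviates the full scripts/rpc.py path shown in the log, and aio_file stands for the long test/nvmf/target/aio_bdev path):

    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M aio_file
    rpc.py bdev_aio_rescan aio_bdev        # bdev grows; lvstore still reports 49 data clusters
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf (pid 2270594, started with -q 128 -w randwrite -o 4096 -t 10 -S 1 -z) then attaches Nvme0 over TCP from the default namespace, and perform_tests starts the 10-second workload whose per-second tables follow.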
00:41:14.459 Latency(us) 00:41:14.459 [2024-12-09T09:51:59.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.459 Nvme0n1 : 1.00 5998.00 23.43 0.00 0.00 0.00 0.00 0.00 00:41:14.459 [2024-12-09T09:51:59.113Z] =================================================================================================================== 00:41:14.459 [2024-12-09T09:51:59.113Z] Total : 5998.00 23.43 0.00 0.00 0.00 0.00 0.00 00:41:14.459 00:41:15.081 10:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:15.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:15.081 Nvme0n1 : 2.00 6055.00 23.65 0.00 0.00 0.00 0.00 0.00 00:41:15.081 [2024-12-09T09:51:59.735Z] =================================================================================================================== 00:41:15.081 [2024-12-09T09:51:59.735Z] Total : 6055.00 23.65 0.00 0.00 0.00 0.00 0.00 00:41:15.081 00:41:15.373 true 00:41:15.373 10:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:15.373 10:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:15.646 10:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:15.646 10:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:15.646 10:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2270975 00:41:16.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:16.262 Nvme0n1 : 3.00 6170.00 24.10 0.00 0.00 0.00 0.00 0.00 00:41:16.262 [2024-12-09T09:52:00.916Z] =================================================================================================================== 00:41:16.262 [2024-12-09T09:52:00.916Z] Total : 6170.00 24.10 0.00 0.00 0.00 0.00 0.00 00:41:16.262 00:41:17.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:17.278 Nvme0n1 : 4.00 6147.50 24.01 0.00 0.00 0.00 0.00 0.00 00:41:17.278 [2024-12-09T09:52:01.932Z] =================================================================================================================== 00:41:17.278 [2024-12-09T09:52:01.932Z] Total : 6147.50 24.01 0.00 0.00 0.00 0.00 0.00 00:41:17.278 00:41:18.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:18.280 Nvme0n1 : 5.00 7350.00 28.71 0.00 0.00 0.00 0.00 0.00 00:41:18.280 [2024-12-09T09:52:02.934Z] =================================================================================================================== 00:41:18.280 [2024-12-09T09:52:02.934Z] Total : 7350.00 28.71 0.00 0.00 0.00 0.00 0.00 00:41:18.280 00:41:19.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:19.280 Nvme0n1 : 6.00 7727.67 30.19 0.00 0.00 0.00 0.00 0.00 00:41:19.280 [2024-12-09T09:52:03.934Z] 
=================================================================================================================== 00:41:19.280 [2024-12-09T09:52:03.934Z] Total : 7727.67 30.19 0.00 0.00 0.00 0.00 0.00 00:41:19.280 00:41:20.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:20.282 Nvme0n1 : 7.00 7517.43 29.36 0.00 0.00 0.00 0.00 0.00 00:41:20.282 [2024-12-09T09:52:04.936Z] =================================================================================================================== 00:41:20.282 [2024-12-09T09:52:04.936Z] Total : 7517.43 29.36 0.00 0.00 0.00 0.00 0.00 00:41:20.282 00:41:21.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:21.217 Nvme0n1 : 8.00 7343.75 28.69 0.00 0.00 0.00 0.00 0.00 00:41:21.217 [2024-12-09T09:52:05.871Z] =================================================================================================================== 00:41:21.217 [2024-12-09T09:52:05.871Z] Total : 7343.75 28.69 0.00 0.00 0.00 0.00 0.00 00:41:21.217 00:41:22.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:22.153 Nvme0n1 : 9.00 7222.89 28.21 0.00 0.00 0.00 0.00 0.00 00:41:22.153 [2024-12-09T09:52:06.807Z] =================================================================================================================== 00:41:22.153 [2024-12-09T09:52:06.807Z] Total : 7222.89 28.21 0.00 0.00 0.00 0.00 0.00 00:41:22.153 00:41:23.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:23.088 Nvme0n1 : 10.00 7126.20 27.84 0.00 0.00 0.00 0.00 0.00 00:41:23.088 [2024-12-09T09:52:07.742Z] =================================================================================================================== 00:41:23.088 [2024-12-09T09:52:07.742Z] Total : 7126.20 27.84 0.00 0.00 0.00 0.00 0.00 00:41:23.088 00:41:23.088 00:41:23.088 Latency(us) 00:41:23.088 [2024-12-09T09:52:07.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:23.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:23.088 Nvme0n1 : 10.02 7123.40 27.83 0.00 0.00 17943.24 5582.70 28350.39 00:41:23.088 [2024-12-09T09:52:07.742Z] =================================================================================================================== 00:41:23.088 [2024-12-09T09:52:07.742Z] Total : 7123.40 27.83 0.00 0.00 17943.24 5582.70 28350.39 00:41:23.088 { 00:41:23.088 "results": [ 00:41:23.088 { 00:41:23.088 "job": "Nvme0n1", 00:41:23.088 "core_mask": "0x2", 00:41:23.088 "workload": "randwrite", 00:41:23.088 "status": "finished", 00:41:23.088 "queue_depth": 128, 00:41:23.088 "io_size": 4096, 00:41:23.088 "runtime": 10.019653, 00:41:23.088 "iops": 7123.400381230767, 00:41:23.088 "mibps": 27.825782739182685, 00:41:23.088 "io_failed": 0, 00:41:23.088 "io_timeout": 0, 00:41:23.088 "avg_latency_us": 17943.243883601146, 00:41:23.088 "min_latency_us": 5582.696296296296, 00:41:23.088 "max_latency_us": 28350.388148148148 00:41:23.088 } 00:41:23.088 ], 00:41:23.088 "core_count": 1 00:41:23.088 } 00:41:23.089 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2270594 00:41:23.089 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2270594 ']' 00:41:23.089 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2270594 00:41:23.089 
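What the table above demonstrates: two seconds into the run, lvs_grow issues bdev_lvol_grow_lvstore and total_data_clusters jumps from 49 to 99 while the randwrite workload keeps flowing, with no pause or error in the per-second rows. The closing JSON scores the clean pass at about 7123 IOPS and 17.9 ms average latency over the 10.02 s runtime. The grow-and-verify pair, condensed (same rpc.py and $lvs conventions as above):

    rpc.py bdev_lvol_grow_lvstore -u "$lvs"     # claim the space added by the 400M truncate
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # was 49, now 99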
10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:41:23.089 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:23.089 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2270594 00:41:23.348 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:23.348 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:23.348 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2270594' 00:41:23.348 killing process with pid 2270594 00:41:23.348 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2270594 00:41:23.348 Received shutdown signal, test time was about 10.000000 seconds 00:41:23.348 00:41:23.348 Latency(us) 00:41:23.348 [2024-12-09T09:52:08.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:23.348 [2024-12-09T09:52:08.002Z] =================================================================================================================== 00:41:23.348 [2024-12-09T09:52:08.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:23.348 10:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2270594 00:41:23.606 10:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:24.172 10:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:25.136 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:25.136 10:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:25.702 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:25.702 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:25.702 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:26.270 [2024-12-09 10:52:10.819945] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:26.270 
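Teardown here doubles as a negative test. After the listener and subsystem are removed, free_clusters reads back as 61, which is consistent: 99 total clusters minus the 38 that the 150M lvol has allocated (the num_allocated_clusters visible in the lvol JSON further down). Deleting the AIO bdev then hot-removes the lvstore riding on it, so the next bdev_lvol_get_lvstores is expected to fail; the NOT wrapper inverts the exit status, making the test pass only when the RPC errors out, which is exactly the code -19 "No such device" response below. A hand-rolled equivalent of that check:

    rpc.py bdev_aio_delete aio_bdev
    if rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
        echo "BUG: lvstore survived removal of its base bdev" >&2
        exit 1
    fi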
10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:26.270 10:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:26.835 request: 00:41:26.835 { 00:41:26.835 "uuid": "9b322a29-64b0-41e4-ad47-6fd6bf81e27d", 00:41:26.835 "method": "bdev_lvol_get_lvstores", 00:41:26.835 "req_id": 1 00:41:26.835 } 00:41:26.835 Got JSON-RPC error response 00:41:26.835 response: 00:41:26.835 { 00:41:26.835 "code": -19, 00:41:26.835 "message": "No such device" 00:41:26.835 } 00:41:26.835 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:41:26.835 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:26.835 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:26.835 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:26.835 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:27.400 aio_bdev 00:41:27.400 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
d5c0077c-1691-4efc-85ab-59c40ad009ee 00:41:27.400 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d5c0077c-1691-4efc-85ab-59c40ad009ee 00:41:27.400 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:27.400 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:41:27.400 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:27.400 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:27.400 10:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:28.335 10:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d5c0077c-1691-4efc-85ab-59c40ad009ee -t 2000 00:41:28.900 [ 00:41:28.900 { 00:41:28.900 "name": "d5c0077c-1691-4efc-85ab-59c40ad009ee", 00:41:28.900 "aliases": [ 00:41:28.900 "lvs/lvol" 00:41:28.900 ], 00:41:28.900 "product_name": "Logical Volume", 00:41:28.900 "block_size": 4096, 00:41:28.900 "num_blocks": 38912, 00:41:28.900 "uuid": "d5c0077c-1691-4efc-85ab-59c40ad009ee", 00:41:28.900 "assigned_rate_limits": { 00:41:28.900 "rw_ios_per_sec": 0, 00:41:28.900 "rw_mbytes_per_sec": 0, 00:41:28.900 "r_mbytes_per_sec": 0, 00:41:28.900 "w_mbytes_per_sec": 0 00:41:28.900 }, 00:41:28.900 "claimed": false, 00:41:28.900 "zoned": false, 00:41:28.900 "supported_io_types": { 00:41:28.900 "read": true, 00:41:28.900 "write": true, 00:41:28.900 "unmap": true, 00:41:28.900 "flush": false, 00:41:28.900 "reset": true, 00:41:28.900 "nvme_admin": false, 00:41:28.900 "nvme_io": false, 00:41:28.900 "nvme_io_md": false, 00:41:28.900 "write_zeroes": true, 00:41:28.900 "zcopy": false, 00:41:28.900 "get_zone_info": false, 00:41:28.900 "zone_management": false, 00:41:28.900 "zone_append": false, 00:41:28.900 "compare": false, 00:41:28.900 "compare_and_write": false, 00:41:28.900 "abort": false, 00:41:28.900 "seek_hole": true, 00:41:28.900 "seek_data": true, 00:41:28.900 "copy": false, 00:41:28.900 "nvme_iov_md": false 00:41:28.900 }, 00:41:28.900 "driver_specific": { 00:41:28.900 "lvol": { 00:41:28.900 "lvol_store_uuid": "9b322a29-64b0-41e4-ad47-6fd6bf81e27d", 00:41:28.900 "base_bdev": "aio_bdev", 00:41:28.900 "thin_provision": false, 00:41:28.900 "num_allocated_clusters": 38, 00:41:28.900 "snapshot": false, 00:41:28.900 "clone": false, 00:41:28.900 "esnap_clone": false 00:41:28.900 } 00:41:28.900 } 00:41:28.900 } 00:41:28.900 ] 00:41:28.900 10:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:41:28.900 10:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:28.900 10:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:29.466 10:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:29.466 10:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:29.466 10:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:30.034 10:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:30.034 10:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5c0077c-1691-4efc-85ab-59c40ad009ee 00:41:30.608 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b322a29-64b0-41e4-ad47-6fd6bf81e27d 00:41:30.869 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:31.128 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:31.128 00:41:31.128 real 0m27.729s 00:41:31.128 user 0m28.050s 00:41:31.128 sys 0m3.053s 00:41:31.128 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:31.128 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:31.128 ************************************ 00:41:31.128 END TEST lvs_grow_clean 00:41:31.128 ************************************ 00:41:31.128 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:31.128 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:31.128 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:31.128 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:31.387 ************************************ 00:41:31.387 START TEST lvs_grow_dirty 00:41:31.387 ************************************ 00:41:31.387 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:41:31.387 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:31.387 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:31.387 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:31.387 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:31.387 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:31.388 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:31.388 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:31.388 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:31.388 10:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:31.954 10:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:31.954 10:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:32.893 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:32.893 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:32.893 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:33.461 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:33.461 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:33.461 10:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1ca6cd23-c413-4518-a424-a1744b32e54e lvol 150 00:41:33.720 10:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=89f02827-adf2-4c83-8794-3e87af83ffaf 00:41:33.720 10:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:33.721 10:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:34.289 [2024-12-09 10:52:18.943714] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:34.289 [2024-12-09 10:52:18.943848] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:34.548 true 00:41:34.548 10:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:34.548 10:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:35.117 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:35.117 10:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:35.684 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89f02827-adf2-4c83-8794-3e87af83ffaf 00:41:36.622 10:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:36.881 [2024-12-09 10:52:21.340370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:36.881 10:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2273820 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2273820 /var/tmp/bdevperf.sock 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2273820 ']' 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:37.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
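lvs_grow_dirty reruns the same lvs_grow body with the dirty argument: a fresh 200M AIO file, a new lvstore (1ca6cd23-c413-4518-a424-a1744b32e54e) and 150M lvol (89f02827-adf2-4c83-8794-3e87af83ffaf), the same subsystem and listener, and a second bdevperf (pid 2273820) with the switches the log spells out: -m 0x2 pins it to core 1, -q 128 -w randwrite -o 4096 -t 10 define the workload, -S 1 prints the per-second tables, and -z keeps it idle until perform_tests arrives over /var/tmp/bdevperf.sock. Once it is listening, the controller is attached across the namespace boundary exactly as before:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0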
00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:37.451 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:37.710 [2024-12-09 10:52:22.200903] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:41:37.710 [2024-12-09 10:52:22.201094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273820 ] 00:41:37.969 [2024-12-09 10:52:22.368960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:37.969 [2024-12-09 10:52:22.490360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:38.227 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:38.227 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:38.227 10:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:39.161 Nvme0n1 00:41:39.161 10:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:40.099 [ 00:41:40.099 { 00:41:40.099 "name": "Nvme0n1", 00:41:40.099 "aliases": [ 00:41:40.099 "89f02827-adf2-4c83-8794-3e87af83ffaf" 00:41:40.099 ], 00:41:40.099 "product_name": "NVMe disk", 00:41:40.099 "block_size": 4096, 00:41:40.099 "num_blocks": 38912, 00:41:40.099 "uuid": "89f02827-adf2-4c83-8794-3e87af83ffaf", 00:41:40.099 "numa_id": 1, 00:41:40.099 "assigned_rate_limits": { 00:41:40.099 "rw_ios_per_sec": 0, 00:41:40.099 "rw_mbytes_per_sec": 0, 00:41:40.099 "r_mbytes_per_sec": 0, 00:41:40.099 "w_mbytes_per_sec": 0 00:41:40.099 }, 00:41:40.099 "claimed": false, 00:41:40.099 "zoned": false, 00:41:40.099 "supported_io_types": { 00:41:40.099 "read": true, 00:41:40.099 "write": true, 00:41:40.099 "unmap": true, 00:41:40.099 "flush": true, 00:41:40.099 "reset": true, 00:41:40.099 "nvme_admin": true, 00:41:40.099 "nvme_io": true, 00:41:40.099 "nvme_io_md": false, 00:41:40.099 "write_zeroes": true, 00:41:40.099 "zcopy": false, 00:41:40.099 "get_zone_info": false, 00:41:40.099 "zone_management": false, 00:41:40.099 "zone_append": false, 00:41:40.099 "compare": true, 00:41:40.099 "compare_and_write": true, 00:41:40.099 "abort": true, 00:41:40.099 "seek_hole": false, 00:41:40.099 "seek_data": false, 00:41:40.099 "copy": true, 00:41:40.099 "nvme_iov_md": false 00:41:40.099 }, 00:41:40.099 "memory_domains": [ 00:41:40.099 { 00:41:40.099 "dma_device_id": "system", 00:41:40.099 "dma_device_type": 1 00:41:40.099 } 00:41:40.099 ], 00:41:40.099 "driver_specific": { 00:41:40.099 "nvme": [ 00:41:40.099 { 00:41:40.099 "trid": { 00:41:40.099 "trtype": "TCP", 00:41:40.099 "adrfam": "IPv4", 00:41:40.099 "traddr": "10.0.0.2", 00:41:40.099 "trsvcid": "4420", 00:41:40.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:40.099 }, 00:41:40.099 "ctrlr_data": 
{ 00:41:40.099 "cntlid": 1, 00:41:40.099 "vendor_id": "0x8086", 00:41:40.099 "model_number": "SPDK bdev Controller", 00:41:40.099 "serial_number": "SPDK0", 00:41:40.099 "firmware_revision": "25.01", 00:41:40.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:40.099 "oacs": { 00:41:40.099 "security": 0, 00:41:40.099 "format": 0, 00:41:40.099 "firmware": 0, 00:41:40.099 "ns_manage": 0 00:41:40.099 }, 00:41:40.099 "multi_ctrlr": true, 00:41:40.099 "ana_reporting": false 00:41:40.099 }, 00:41:40.099 "vs": { 00:41:40.099 "nvme_version": "1.3" 00:41:40.099 }, 00:41:40.099 "ns_data": { 00:41:40.099 "id": 1, 00:41:40.099 "can_share": true 00:41:40.099 } 00:41:40.099 } 00:41:40.099 ], 00:41:40.099 "mp_policy": "active_passive" 00:41:40.099 } 00:41:40.099 } 00:41:40.099 ] 00:41:40.099 10:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2274084 00:41:40.099 10:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:40.099 10:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:40.099 Running I/O for 10 seconds... 00:41:41.479 Latency(us) 00:41:41.479 [2024-12-09T09:52:26.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:41.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:41.479 Nvme0n1 : 1.00 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:41:41.479 [2024-12-09T09:52:26.133Z] =================================================================================================================== 00:41:41.479 [2024-12-09T09:52:26.133Z] Total : 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:41:41.479 00:41:42.049 10:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:42.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:42.308 Nvme0n1 : 2.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:41:42.308 [2024-12-09T09:52:26.962Z] =================================================================================================================== 00:41:42.308 [2024-12-09T09:52:26.962Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:41:42.308 00:41:42.308 true 00:41:42.308 10:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:42.308 10:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:42.876 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:42.876 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:42.877 10:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2274084 00:41:43.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:43.138 Nvme0n1 : 3.00 
6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:41:43.138 [2024-12-09T09:52:27.792Z] =================================================================================================================== 00:41:43.138 [2024-12-09T09:52:27.792Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:41:43.138 00:41:44.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:44.078 Nvme0n1 : 4.00 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:41:44.078 [2024-12-09T09:52:28.732Z] =================================================================================================================== 00:41:44.078 [2024-12-09T09:52:28.732Z] Total : 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:41:44.078 00:41:45.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:45.459 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:41:45.459 [2024-12-09T09:52:30.113Z] =================================================================================================================== 00:41:45.459 [2024-12-09T09:52:30.113Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:41:45.459 00:41:46.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:46.396 Nvme0n1 : 6.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:41:46.396 [2024-12-09T09:52:31.050Z] =================================================================================================================== 00:41:46.396 [2024-12-09T09:52:31.050Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:41:46.396 00:41:47.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:47.334 Nvme0n1 : 7.00 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:41:47.334 [2024-12-09T09:52:31.988Z] =================================================================================================================== 00:41:47.334 [2024-12-09T09:52:31.988Z] Total : 6458.86 25.23 0.00 0.00 0.00 0.00 0.00 00:41:47.334 00:41:48.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:48.272 Nvme0n1 : 8.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:41:48.272 [2024-12-09T09:52:32.926Z] =================================================================================================================== 00:41:48.272 [2024-12-09T09:52:32.926Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:41:48.272 00:41:49.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:49.209 Nvme0n1 : 9.00 6491.11 25.36 0.00 0.00 0.00 0.00 0.00 00:41:49.209 [2024-12-09T09:52:33.863Z] =================================================================================================================== 00:41:49.209 [2024-12-09T09:52:33.863Z] Total : 6491.11 25.36 0.00 0.00 0.00 0.00 0.00 00:41:49.209 00:41:50.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:50.148 Nvme0n1 : 10.00 6494.80 25.37 0.00 0.00 0.00 0.00 0.00 00:41:50.148 [2024-12-09T09:52:34.802Z] =================================================================================================================== 00:41:50.148 [2024-12-09T09:52:34.802Z] Total : 6494.80 25.37 0.00 0.00 0.00 0.00 0.00 00:41:50.148 00:41:50.148 00:41:50.148 Latency(us) 00:41:50.148 [2024-12-09T09:52:34.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:50.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:50.148 Nvme0n1 : 10.01 6496.15 25.38 0.00 0.00 19687.24 10097.40 42719.76 00:41:50.148 [2024-12-09T09:52:34.802Z] 
=================================================================================================================== 00:41:50.148 [2024-12-09T09:52:34.802Z] Total : 6496.15 25.38 0.00 0.00 19687.24 10097.40 42719.76 00:41:50.148 { 00:41:50.148 "results": [ 00:41:50.148 { 00:41:50.148 "job": "Nvme0n1", 00:41:50.148 "core_mask": "0x2", 00:41:50.148 "workload": "randwrite", 00:41:50.148 "status": "finished", 00:41:50.148 "queue_depth": 128, 00:41:50.148 "io_size": 4096, 00:41:50.148 "runtime": 10.009774, 00:41:50.148 "iops": 6496.150662342627, 00:41:50.148 "mibps": 25.375588524775885, 00:41:50.148 "io_failed": 0, 00:41:50.148 "io_timeout": 0, 00:41:50.148 "avg_latency_us": 19687.237248716305, 00:41:50.148 "min_latency_us": 10097.39851851852, 00:41:50.148 "max_latency_us": 42719.76296296297 00:41:50.148 } 00:41:50.148 ], 00:41:50.148 "core_count": 1 00:41:50.148 } 00:41:50.148 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2273820 00:41:50.148 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2273820 ']' 00:41:50.148 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2273820 00:41:50.148 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:50.148 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:50.148 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273820 00:41:50.408 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:50.408 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:50.408 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273820' 00:41:50.408 killing process with pid 2273820 00:41:50.408 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2273820 00:41:50.408 Received shutdown signal, test time was about 10.000000 seconds 00:41:50.408 00:41:50.408 Latency(us) 00:41:50.408 [2024-12-09T09:52:35.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:50.408 [2024-12-09T09:52:35.062Z] =================================================================================================================== 00:41:50.408 [2024-12-09T09:52:35.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:50.408 10:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2273820 00:41:50.666 10:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:51.609 10:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:52.177 
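The "mibps" figure in the results JSON above is derived directly from "iops" and "io_size". A quick shell check, a sketch only (bc is assumed to be available on the build host), reproduces the reported value from the numbers in that JSON:

    # MiB/s = IOPS * io_size_bytes / 2^20, values copied from the results JSON above
    echo '6496.150662342627 * 4096 / 1048576' | bc -l
    # prints ~25.3755885247..., matching the reported "mibps"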
10:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:52.177 10:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2269753 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2269753 00:41:52.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2269753 Killed "${NVMF_APP[@]}" "$@" 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2275528 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2275528 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2275528 ']' 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:52.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
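The trace above is the crux of the lvs_grow_dirty case: the lvstore was grown while bdevperf I/O was in flight, the original target (pid 2269753) is then killed with SIGKILL so the store is never cleanly unloaded, and a fresh single-core target is started to recover it. A condensed sketch of that sequence, with paths shortened and $LVS_UUID/$NVMF_PID as placeholders rather than values taken from this run:

    # Grow the lvstore under load, then simulate a crash and restart the target
    rpc.py bdev_lvol_grow_lvstore -u "$LVS_UUID"        # grow while bdevperf I/O runs
    kill -9 "$NVMF_PID"                                 # hard kill: lvstore left dirty
    nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &   # restart on a single core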
00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:52.744 10:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:53.003 [2024-12-09 10:52:37.477840] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:53.003 [2024-12-09 10:52:37.480577] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:41:53.003 [2024-12-09 10:52:37.480710] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:53.262 [2024-12-09 10:52:37.664011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:53.262 [2024-12-09 10:52:37.780465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:53.262 [2024-12-09 10:52:37.780584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:53.262 [2024-12-09 10:52:37.780620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:53.262 [2024-12-09 10:52:37.780660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:53.262 [2024-12-09 10:52:37.780671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:53.262 [2024-12-09 10:52:37.781454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:53.523 [2024-12-09 10:52:37.956881] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:53.523 [2024-12-09 10:52:37.957292] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
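The notices above show the restarted target flipping its app thread (app_thread) and NVMe-oF poll group into interrupt mode. As an illustration only, one way to inspect the reactor state from the shell is the framework_get_reactors RPC; the jq filter assumes that RPC's usual output shape, which is not something this log prints:

    # Illustrative: dump the state of the single reactor started with -m 0x1
    rpc.py framework_get_reactors | jq '.reactors[0]'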
00:41:53.523 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:53.523 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:53.523 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:53.523 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:53.523 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:53.523 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:53.523 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:54.093 [2024-12-09 10:52:38.723710] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:54.093 [2024-12-09 10:52:38.723978] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:54.093 [2024-12-09 10:52:38.724040] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 89f02827-adf2-4c83-8794-3e87af83ffaf 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=89f02827-adf2-4c83-8794-3e87af83ffaf 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:54.353 10:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:54.922 10:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89f02827-adf2-4c83-8794-3e87af83ffaf -t 2000 00:41:55.492 [ 00:41:55.492 { 00:41:55.492 "name": "89f02827-adf2-4c83-8794-3e87af83ffaf", 00:41:55.492 "aliases": [ 00:41:55.492 "lvs/lvol" 00:41:55.492 ], 00:41:55.492 "product_name": "Logical Volume", 00:41:55.492 "block_size": 4096, 00:41:55.492 "num_blocks": 38912, 00:41:55.492 "uuid": "89f02827-adf2-4c83-8794-3e87af83ffaf", 00:41:55.492 "assigned_rate_limits": { 00:41:55.492 "rw_ios_per_sec": 0, 00:41:55.492 "rw_mbytes_per_sec": 0, 00:41:55.492 
"r_mbytes_per_sec": 0, 00:41:55.492 "w_mbytes_per_sec": 0 00:41:55.492 }, 00:41:55.492 "claimed": false, 00:41:55.492 "zoned": false, 00:41:55.492 "supported_io_types": { 00:41:55.492 "read": true, 00:41:55.492 "write": true, 00:41:55.492 "unmap": true, 00:41:55.492 "flush": false, 00:41:55.492 "reset": true, 00:41:55.492 "nvme_admin": false, 00:41:55.492 "nvme_io": false, 00:41:55.492 "nvme_io_md": false, 00:41:55.492 "write_zeroes": true, 00:41:55.492 "zcopy": false, 00:41:55.492 "get_zone_info": false, 00:41:55.493 "zone_management": false, 00:41:55.493 "zone_append": false, 00:41:55.493 "compare": false, 00:41:55.493 "compare_and_write": false, 00:41:55.493 "abort": false, 00:41:55.493 "seek_hole": true, 00:41:55.493 "seek_data": true, 00:41:55.493 "copy": false, 00:41:55.493 "nvme_iov_md": false 00:41:55.493 }, 00:41:55.493 "driver_specific": { 00:41:55.493 "lvol": { 00:41:55.493 "lvol_store_uuid": "1ca6cd23-c413-4518-a424-a1744b32e54e", 00:41:55.493 "base_bdev": "aio_bdev", 00:41:55.493 "thin_provision": false, 00:41:55.493 "num_allocated_clusters": 38, 00:41:55.493 "snapshot": false, 00:41:55.493 "clone": false, 00:41:55.493 "esnap_clone": false 00:41:55.493 } 00:41:55.493 } 00:41:55.493 } 00:41:55.493 ] 00:41:55.752 10:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:55.752 10:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:55.752 10:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:56.320 10:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:56.320 10:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:56.320 10:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:56.888 10:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:56.888 10:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:57.825 [2024-12-09 10:52:42.154339] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:57.825 10:52:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:57.825 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:41:58.393 request: 00:41:58.393 { 00:41:58.393 "uuid": "1ca6cd23-c413-4518-a424-a1744b32e54e", 00:41:58.393 "method": "bdev_lvol_get_lvstores", 00:41:58.393 "req_id": 1 00:41:58.393 } 00:41:58.393 Got JSON-RPC error response 00:41:58.393 response: 00:41:58.393 { 00:41:58.393 "code": -19, 00:41:58.393 "message": "No such device" 00:41:58.393 } 00:41:58.393 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:58.393 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:58.393 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:58.393 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:58.393 10:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:58.652 aio_bdev 00:41:58.652 10:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 89f02827-adf2-4c83-8794-3e87af83ffaf 00:41:58.652 10:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=89f02827-adf2-4c83-8794-3e87af83ffaf 00:41:58.652 10:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:58.652 10:52:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:58.652 10:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:58.652 10:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:58.652 10:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:59.590 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89f02827-adf2-4c83-8794-3e87af83ffaf -t 2000 00:42:00.161 [ 00:42:00.161 { 00:42:00.161 "name": "89f02827-adf2-4c83-8794-3e87af83ffaf", 00:42:00.161 "aliases": [ 00:42:00.161 "lvs/lvol" 00:42:00.161 ], 00:42:00.161 "product_name": "Logical Volume", 00:42:00.161 "block_size": 4096, 00:42:00.161 "num_blocks": 38912, 00:42:00.161 "uuid": "89f02827-adf2-4c83-8794-3e87af83ffaf", 00:42:00.161 "assigned_rate_limits": { 00:42:00.161 "rw_ios_per_sec": 0, 00:42:00.161 "rw_mbytes_per_sec": 0, 00:42:00.161 "r_mbytes_per_sec": 0, 00:42:00.161 "w_mbytes_per_sec": 0 00:42:00.161 }, 00:42:00.161 "claimed": false, 00:42:00.161 "zoned": false, 00:42:00.161 "supported_io_types": { 00:42:00.161 "read": true, 00:42:00.161 "write": true, 00:42:00.161 "unmap": true, 00:42:00.161 "flush": false, 00:42:00.161 "reset": true, 00:42:00.161 "nvme_admin": false, 00:42:00.161 "nvme_io": false, 00:42:00.161 "nvme_io_md": false, 00:42:00.162 "write_zeroes": true, 00:42:00.162 "zcopy": false, 00:42:00.162 "get_zone_info": false, 00:42:00.162 "zone_management": false, 00:42:00.162 "zone_append": false, 00:42:00.162 "compare": false, 00:42:00.162 "compare_and_write": false, 00:42:00.162 "abort": false, 00:42:00.162 "seek_hole": true, 00:42:00.162 "seek_data": true, 00:42:00.162 "copy": false, 00:42:00.162 "nvme_iov_md": false 00:42:00.162 }, 00:42:00.162 "driver_specific": { 00:42:00.162 "lvol": { 00:42:00.162 "lvol_store_uuid": "1ca6cd23-c413-4518-a424-a1744b32e54e", 00:42:00.162 "base_bdev": "aio_bdev", 00:42:00.162 "thin_provision": false, 00:42:00.162 "num_allocated_clusters": 38, 00:42:00.162 "snapshot": false, 00:42:00.162 "clone": false, 00:42:00.162 "esnap_clone": false 00:42:00.162 } 00:42:00.162 } 00:42:00.162 } 00:42:00.162 ] 00:42:00.162 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:42:00.162 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:42:00.162 10:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:00.732 10:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:00.732 10:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:42:00.732 10:52:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:01.300 10:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:01.300 10:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89f02827-adf2-4c83-8794-3e87af83ffaf 00:42:01.870 10:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ca6cd23-c413-4518-a424-a1744b32e54e 00:42:02.815 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:03.384 00:42:03.384 real 0m32.084s 00:42:03.384 user 0m49.127s 00:42:03.384 sys 0m6.789s 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:03.384 ************************************ 00:42:03.384 END TEST lvs_grow_dirty 00:42:03.384 ************************************ 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:03.384 nvmf_trace.0 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
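The process_shm/tar steps above preserve the target's shared-memory trace file so the run can be analyzed offline with spdk_trace. Condensed, the capture amounts to the following sketch ($output_dir stands in for the job's output directory):

    # Archive the SPDK trace ring from /dev/shm for offline analysis
    shm_file=$(find /dev/shm -name '*.0' -printf '%f\n')    # e.g. nvmf_trace.0
    tar -C /dev/shm/ -czf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"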
00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:03.384 10:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:03.384 rmmod nvme_tcp 00:42:03.384 rmmod nvme_fabrics 00:42:03.384 rmmod nvme_keyring 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2275528 ']' 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2275528 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2275528 ']' 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2275528 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2275528 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2275528' 00:42:03.644 killing process with pid 2275528 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2275528 00:42:03.644 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2275528 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:03.905 10:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:06.447 00:42:06.447 real 1m7.249s 00:42:06.447 user 1m20.187s 00:42:06.447 sys 0m13.007s 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:06.447 ************************************ 00:42:06.447 END TEST nvmf_lvs_grow 00:42:06.447 ************************************ 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:06.447 ************************************ 00:42:06.447 START TEST nvmf_bdev_io_wait 00:42:06.447 ************************************ 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:06.447 * Looking for test storage... 
00:42:06.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:06.447 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.448 --rc genhtml_branch_coverage=1 00:42:06.448 --rc genhtml_function_coverage=1 00:42:06.448 --rc genhtml_legend=1 00:42:06.448 --rc geninfo_all_blocks=1 00:42:06.448 --rc geninfo_unexecuted_blocks=1 00:42:06.448 00:42:06.448 ' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.448 --rc genhtml_branch_coverage=1 00:42:06.448 --rc genhtml_function_coverage=1 00:42:06.448 --rc genhtml_legend=1 00:42:06.448 --rc geninfo_all_blocks=1 00:42:06.448 --rc geninfo_unexecuted_blocks=1 00:42:06.448 00:42:06.448 ' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.448 --rc genhtml_branch_coverage=1 00:42:06.448 --rc genhtml_function_coverage=1 00:42:06.448 --rc genhtml_legend=1 00:42:06.448 --rc geninfo_all_blocks=1 00:42:06.448 --rc geninfo_unexecuted_blocks=1 00:42:06.448 00:42:06.448 ' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.448 --rc genhtml_branch_coverage=1 00:42:06.448 --rc genhtml_function_coverage=1 00:42:06.448 --rc genhtml_legend=1 00:42:06.448 --rc geninfo_all_blocks=1 00:42:06.448 --rc 
geninfo_unexecuted_blocks=1 00:42:06.448 00:42:06.448 ' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:06.448 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:42:06.449 10:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
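The arrays built above form the NIC allow-list: each supported part is keyed by its vendor:device pair (E810 as 0x8086:0x1592/0x159b, X722 as 0x8086:0x37d2, plus a set of Mellanox IDs), and the host's PCI bus is then walked against that list. As an illustration of the matching input, not a command this log runs, the same function can be probed with lspci:

    # Illustrative: the matched ports report the E810 device ID 0x159b
    lspci -nn -s 0000:84:00.0
    # e.g. "84:00.0 Ethernet controller [0200]: Intel Corporation ... [8086:159b]"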
00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:09.740 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:42:09.740 Found 0000:84:00.0 (0x8086 - 0x159b) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:42:09.741 Found 0000:84:00.1 (0x8086 - 0x159b) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:42:09.741 Found net devices under 0000:84:00.0: cvl_0_0 00:42:09.741 
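Each matched PCI function is then resolved to its kernel network interface through the sysfs glob traced above, which is how 0000:84:00.0 becomes cvl_0_0. The lookup reduces to:

    # Map a PCI function to its net device name via sysfs
    ls /sys/bus/pci/devices/0000:84:00.0/net/
    # -> cvl_0_0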
10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:42:09.741 Found net devices under 0000:84:00.1: cvl_0_1 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:09.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:09.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:42:09.741 00:42:09.741 --- 10.0.0.2 ping statistics --- 00:42:09.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:09.741 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:09.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:09.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:42:09.741 00:42:09.741 --- 10.0.0.1 ping statistics --- 00:42:09.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:09.741 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2278838 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2278838 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2278838 ']' 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:09.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
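The nvmf_tcp_init steps traced above build a real two-host topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace and addressed as the target (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic has to cross the physical link rather than loop back. A minimal standalone sketch of the same layout, using only commands that appear in the trace (interface names and addresses as on this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator NIC
    ping -c 1 10.0.0.2                                             # sanity check: initiator -> target

The two pings in the log (0.294 ms out, 0.118 ms back from inside the namespace) confirm both directions work before the target is started.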
00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:09.741 10:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:09.741 [2024-12-09 10:52:54.013478] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:09.741 [2024-12-09 10:52:54.016272] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:42:09.741 [2024-12-09 10:52:54.016405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:09.741 [2024-12-09 10:52:54.198872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:09.741 [2024-12-09 10:52:54.323572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:09.741 [2024-12-09 10:52:54.323688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:09.742 [2024-12-09 10:52:54.323740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:09.742 [2024-12-09 10:52:54.323775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:09.742 [2024-12-09 10:52:54.323801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:09.742 [2024-12-09 10:52:54.327336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:09.742 [2024-12-09 10:52:54.327438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:09.742 [2024-12-09 10:52:54.327526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:09.742 [2024-12-09 10:52:54.327530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:09.742 [2024-12-09 10:52:54.328079] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
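Because nvmf_tgt is launched with --wait-for-rpc, everything after process start is driven over the app's UNIX-domain RPC socket (/var/tmp/spdk.sock, which waitforlisten polls above); the rpc_cmd calls that follow are the entire target configuration. A hedged sketch of the same bring-up issued directly with scripts/rpc.py, commands and arguments copied from the rpc_cmd trace below:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1              # deliberately tiny bdev_io pool so I/O must queue (the point of bdev_io_wait)
    $RPC framework_start_init                    # finish the initialization deferred by --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The socket is path-based, so the RPCs reach the target even though the app itself runs inside the cvl_0_0_ns_spdk namespace.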
00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.001 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.260 [2024-12-09 10:52:54.748624] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:10.260 [2024-12-09 10:52:54.749822] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:10.260 [2024-12-09 10:52:54.750291] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:10.260 [2024-12-09 10:52:54.751586] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:42:10.260 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.260 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:10.260 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.260 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.261 [2024-12-09 10:52:54.760548] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.261 Malloc0 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:10.261 [2024-12-09 10:52:54.836456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2278874 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2278876 00:42:10.261 10:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:10.261 { 00:42:10.261 "params": { 00:42:10.261 "name": "Nvme$subsystem", 00:42:10.261 "trtype": "$TEST_TRANSPORT", 00:42:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:10.261 "adrfam": "ipv4", 00:42:10.261 "trsvcid": "$NVMF_PORT", 00:42:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:10.261 "hdgst": ${hdgst:-false}, 00:42:10.261 "ddgst": ${ddgst:-false} 00:42:10.261 }, 00:42:10.261 "method": "bdev_nvme_attach_controller" 00:42:10.261 } 00:42:10.261 EOF 00:42:10.261 )") 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2278878 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:10.261 { 00:42:10.261 "params": { 00:42:10.261 "name": "Nvme$subsystem", 00:42:10.261 "trtype": "$TEST_TRANSPORT", 00:42:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:10.261 "adrfam": "ipv4", 00:42:10.261 "trsvcid": "$NVMF_PORT", 00:42:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:10.261 "hdgst": ${hdgst:-false}, 00:42:10.261 "ddgst": ${ddgst:-false} 00:42:10.261 }, 00:42:10.261 "method": "bdev_nvme_attach_controller" 00:42:10.261 } 00:42:10.261 EOF 00:42:10.261 )") 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=2278881 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:10.261 { 00:42:10.261 "params": { 00:42:10.261 "name": "Nvme$subsystem", 00:42:10.261 "trtype": "$TEST_TRANSPORT", 00:42:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:10.261 "adrfam": "ipv4", 00:42:10.261 "trsvcid": "$NVMF_PORT", 00:42:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:10.261 "hdgst": ${hdgst:-false}, 00:42:10.261 "ddgst": ${ddgst:-false} 00:42:10.261 }, 00:42:10.261 "method": "bdev_nvme_attach_controller" 00:42:10.261 } 00:42:10.261 EOF 00:42:10.261 )") 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:10.261 { 00:42:10.261 "params": { 00:42:10.261 "name": "Nvme$subsystem", 00:42:10.261 "trtype": "$TEST_TRANSPORT", 00:42:10.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:10.261 "adrfam": "ipv4", 00:42:10.261 "trsvcid": "$NVMF_PORT", 00:42:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:10.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:10.261 "hdgst": ${hdgst:-false}, 00:42:10.261 "ddgst": ${ddgst:-false} 00:42:10.261 }, 00:42:10.261 "method": "bdev_nvme_attach_controller" 00:42:10.261 } 00:42:10.261 EOF 00:42:10.261 )") 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2278874 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
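gen_nvmf_target_json expands the heredoc template above into one bdev_nvme_attach_controller entry per controller and hands it to each bdevperf via a process-substitution descriptor (--json /dev/fd/63), so no config file ever touches disk. The printf output below shows the expanded entry; here is a sketch of the same attach written to a plain file instead, with the outer subsystems/bdev envelope reconstructed by hand (an assumption -- the log excerpt only prints the inner entry):

    # envelope reconstructed; only the inner object appears verbatim in the log
    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF

bdevperf would then take --json /tmp/nvme1.json in place of the /dev/fd/63 substitution the harness uses.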
00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:10.261 "params": { 00:42:10.261 "name": "Nvme1", 00:42:10.261 "trtype": "tcp", 00:42:10.261 "traddr": "10.0.0.2", 00:42:10.261 "adrfam": "ipv4", 00:42:10.261 "trsvcid": "4420", 00:42:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:10.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:10.261 "hdgst": false, 00:42:10.261 "ddgst": false 00:42:10.261 }, 00:42:10.261 "method": "bdev_nvme_attach_controller" 00:42:10.261 }' 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:10.261 "params": { 00:42:10.261 "name": "Nvme1", 00:42:10.261 "trtype": "tcp", 00:42:10.261 "traddr": "10.0.0.2", 00:42:10.261 "adrfam": "ipv4", 00:42:10.261 "trsvcid": "4420", 00:42:10.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:10.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:10.261 "hdgst": false, 00:42:10.261 "ddgst": false 00:42:10.261 }, 00:42:10.261 "method": "bdev_nvme_attach_controller" 00:42:10.261 }' 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:10.261 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:10.261 "params": { 00:42:10.261 "name": "Nvme1", 00:42:10.262 "trtype": "tcp", 00:42:10.262 "traddr": "10.0.0.2", 00:42:10.262 "adrfam": "ipv4", 00:42:10.262 "trsvcid": "4420", 00:42:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:10.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:10.262 "hdgst": false, 00:42:10.262 "ddgst": false 00:42:10.262 }, 00:42:10.262 "method": "bdev_nvme_attach_controller" 00:42:10.262 }' 00:42:10.262 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:42:10.262 10:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:10.262 "params": { 00:42:10.262 "name": "Nvme1", 00:42:10.262 "trtype": "tcp", 00:42:10.262 "traddr": "10.0.0.2", 00:42:10.262 "adrfam": "ipv4", 00:42:10.262 "trsvcid": "4420", 00:42:10.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:10.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:10.262 "hdgst": false, 00:42:10.262 "ddgst": false 00:42:10.262 }, 00:42:10.262 "method": "bdev_nvme_attach_controller" 00:42:10.262 }' 00:42:10.262 [2024-12-09 10:52:54.892864] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:42:10.262 [2024-12-09 10:52:54.892957] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:42:10.262 [2024-12-09 10:52:54.896337] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:42:10.262 [2024-12-09 10:52:54.896335] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:42:10.262 [2024-12-09 10:52:54.896333] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:42:10.262 [2024-12-09 10:52:54.896438] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:42:10.262 [2024-12-09 10:52:54.896439] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:42:10.262 [2024-12-09 10:52:54.896439] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:42:10.529 [2024-12-09 10:52:55.060574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.529 [2024-12-09 10:52:55.112791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:10.789 [2024-12-09 10:52:55.192854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.789 [2024-12-09 10:52:55.251529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:10.789 [2024-12-09 10:52:55.304061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.789 [2024-12-09 10:52:55.362069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:10.789 [2024-12-09 10:52:55.441384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:11.048 [2024-12-09 10:52:55.498275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:11.049 Running I/O for 1 seconds... 00:42:11.307 Running I/O for 1 seconds... 00:42:11.307 Running I/O for 1 seconds... 00:42:11.307 Running I/O for 1 seconds... 
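The four "Running I/O for 1 seconds..." lines are the four bdevperf instances started above exercising the same Nvme1n1 controller concurrently, one workload apiece: write (-m 0x10, -i 1), read (-m 0x20, -i 2), flush (-m 0x40, -i 3), and unmap (-m 0x80, -i 4), all at -q 128 -o 4096 -t 1 -s 256. The harness backgrounds each and reaps it by PID; a condensed sketch of that fan-out/wait pattern, with gen_json as a hypothetical stand-in for gen_nvmf_target_json:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    $BDEVPERF -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BDEVPERF -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BDEVPERF -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BDEVPERF -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    # reap each in turn, as the trace does (wait 2278874, 2278876, 2278878, 2278881)
    wait $WRITE_PID; wait $READ_PID; wait $FLUSH_PID; wait $UNMAP_PID

Distinct -i instance IDs give each process its own hugepage file prefix (the spdk1..spdk4 visible in the EAL lines above), and the non-overlapping core masks keep the four reactors off each other's cores.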
00:42:12.243 7083.00 IOPS, 27.67 MiB/s 00:42:12.243 Latency(us) 00:42:12.243 [2024-12-09T09:52:56.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:12.243 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:42:12.243 Nvme1n1 : 1.02 7115.39 27.79 0.00 0.00 17850.21 2172.40 28544.57 00:42:12.243 [2024-12-09T09:52:56.897Z] =================================================================================================================== 00:42:12.243 [2024-12-09T09:52:56.897Z] Total : 7115.39 27.79 0.00 0.00 17850.21 2172.40 28544.57 00:42:12.243 9242.00 IOPS, 36.10 MiB/s 00:42:12.243 Latency(us) 00:42:12.243 [2024-12-09T09:52:56.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:12.243 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:42:12.243 Nvme1n1 : 1.01 9284.19 36.27 0.00 0.00 13718.75 5097.24 18932.62 00:42:12.243 [2024-12-09T09:52:56.897Z] =================================================================================================================== 00:42:12.243 [2024-12-09T09:52:56.897Z] Total : 9284.19 36.27 0.00 0.00 13718.75 5097.24 18932.62 00:42:12.243 7031.00 IOPS, 27.46 MiB/s 00:42:12.243 Latency(us) 00:42:12.243 [2024-12-09T09:52:56.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:12.243 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:42:12.243 Nvme1n1 : 1.01 7158.38 27.96 0.00 0.00 17833.61 3737.98 37671.06 00:42:12.243 [2024-12-09T09:52:56.897Z] =================================================================================================================== 00:42:12.243 [2024-12-09T09:52:56.897Z] Total : 7158.38 27.96 0.00 0.00 17833.61 3737.98 37671.06 00:42:12.243 191856.00 IOPS, 749.44 MiB/s 00:42:12.243 Latency(us) 00:42:12.243 [2024-12-09T09:52:56.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:12.243 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:42:12.243 Nvme1n1 : 1.00 191499.39 748.04 0.00 0.00 664.71 292.79 1844.72 00:42:12.243 [2024-12-09T09:52:56.897Z] =================================================================================================================== 00:42:12.243 [2024-12-09T09:52:56.897Z] Total : 191499.39 748.04 0.00 0.00 664.71 292.79 1844.72 00:42:12.243 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2278876 00:42:12.501 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2278878 00:42:12.501 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2278881 00:42:12.501 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:12.501 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.501 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:12.502 10:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:12.502 rmmod nvme_tcp 00:42:12.502 rmmod nvme_fabrics 00:42:12.502 rmmod nvme_keyring 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2278838 ']' 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2278838 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2278838 ']' 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2278838 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278838 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278838' 00:42:12.502 killing process with pid 2278838 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2278838 00:42:12.502 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2278838 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:12.760 10:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:15.293 00:42:15.293 real 0m8.792s 00:42:15.293 user 0m15.969s 00:42:15.293 sys 0m4.935s 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:15.293 ************************************ 00:42:15.293 END TEST nvmf_bdev_io_wait 00:42:15.293 ************************************ 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:15.293 ************************************ 00:42:15.293 START TEST nvmf_queue_depth 00:42:15.293 ************************************ 00:42:15.293 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:15.293 * Looking for test storage... 
00:42:15.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:15.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.294 --rc genhtml_branch_coverage=1 00:42:15.294 --rc genhtml_function_coverage=1 00:42:15.294 --rc genhtml_legend=1 00:42:15.294 --rc geninfo_all_blocks=1 00:42:15.294 --rc geninfo_unexecuted_blocks=1 00:42:15.294 00:42:15.294 ' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:15.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.294 --rc genhtml_branch_coverage=1 00:42:15.294 --rc genhtml_function_coverage=1 00:42:15.294 --rc genhtml_legend=1 00:42:15.294 --rc geninfo_all_blocks=1 00:42:15.294 --rc geninfo_unexecuted_blocks=1 00:42:15.294 00:42:15.294 ' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:15.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.294 --rc genhtml_branch_coverage=1 00:42:15.294 --rc genhtml_function_coverage=1 00:42:15.294 --rc genhtml_legend=1 00:42:15.294 --rc geninfo_all_blocks=1 00:42:15.294 --rc geninfo_unexecuted_blocks=1 00:42:15.294 00:42:15.294 ' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:15.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.294 --rc genhtml_branch_coverage=1 00:42:15.294 --rc genhtml_function_coverage=1 00:42:15.294 --rc genhtml_legend=1 00:42:15.294 --rc geninfo_all_blocks=1 00:42:15.294 --rc 
geninfo_unexecuted_blocks=1 00:42:15.294 00:42:15.294 ' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.294 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:42:15.295 10:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
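nvmftestinit repeats device discovery for this test: gather_supported_nvmf_pci_devs, whose trace begins here, whitelists known NIC IDs (Intel E810 0x1592/0x159b and X722 0x37d2, plus a list of Mellanox parts) and resolves each matching PCI function to its kernel netdev through sysfs. A reduced sketch of that walk for the E810 ID present on this rig (0x8086:0x159b), reconstructed from the trace rather than copied from common.sh:

    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} (0x8086 - 0x159b)"
        for net in "$pci"/net/*; do            # one entry per netdev bound to this function
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

On this machine both 0000:84:00.0 and 0000:84:00.1 match and map to cvl_0_0 and cvl_0_1, as the "Found net devices" lines below show.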
00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.828 10:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.828 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:42:17.829 Found 0000:84:00.0 (0x8086 - 0x159b) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:42:17.829 Found 0000:84:00.1 (0x8086 - 0x159b) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:42:17.829 Found net devices under 0000:84:00.0: cvl_0_0 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:42:17.829 Found net devices under 0000:84:00.1: cvl_0_1 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:17.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:42:17.829 00:42:17.829 --- 10.0.0.2 ping statistics --- 00:42:17.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.829 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:17.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:42:17.829 00:42:17.829 --- 10.0.0.1 ping statistics --- 00:42:17.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.829 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2281233 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2281233 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2281233 ']' 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:17.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
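The setup traced above is the whole of nvmf_tcp_init's network plumbing: one e810 port (cvl_0_0) is moved into a private namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens TCP/4420 on the initiator side, and a ping in each direction proves the path. A minimal standalone sketch of the same topology, using the exact commands from the trace (the cvl_0_0/cvl_0_1 names are specific to this host's ice-driven E810 ports):

# target port goes into its own namespace; initiator stays in the root ns
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the rule is tagged with a comment so teardown can strip it with a grep (see iptr later)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator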
00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:17.829 10:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:18.088 [2024-12-09 10:53:02.535827] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:18.088 [2024-12-09 10:53:02.538532] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:42:18.088 [2024-12-09 10:53:02.538654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:18.088 [2024-12-09 10:53:02.732063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:18.348 [2024-12-09 10:53:02.844248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:18.348 [2024-12-09 10:53:02.844370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:18.348 [2024-12-09 10:53:02.844407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:18.348 [2024-12-09 10:53:02.844449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:18.348 [2024-12-09 10:53:02.844476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:18.348 [2024-12-09 10:53:02.845806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:18.609 [2024-12-09 10:53:03.025554] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:18.609 [2024-12-09 10:53:03.026205] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
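With the path verified, nvmfappstart launches nvmf_tgt inside the target namespace; because this job passes --interrupt-mode, the NOTICE lines above confirm that both the app thread and the nvmf poll group came up in interrupt rather than polled mode, on the single core selected by -m 0x2. A sketch of the launch plus the wait-for-RPC step; the launch command is taken verbatim from the trace, while the poll loop is only an illustration of what the harness's waitforlisten helper does:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# block until the target answers on its default RPC socket (illustrative loop)
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done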
00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:19.551 [2024-12-09 10:53:03.987110] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.551 10:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:19.551 Malloc0 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:19.551 [2024-12-09 10:53:04.079321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2281388 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2281388 /var/tmp/bdevperf.sock 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2281388 ']' 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:19.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:19.551 10:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:19.551 [2024-12-09 10:53:04.188299] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:42:19.551 [2024-12-09 10:53:04.188464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281388 ] 00:42:19.809 [2024-12-09 10:53:04.358789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:20.070 [2024-12-09 10:53:04.471142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.015 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:21.015 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:42:21.015 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:21.015 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.015 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:21.275 NVMe0n1 00:42:21.276 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.276 10:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:21.276 Running I/O for 10 seconds... 00:42:23.592 3072.00 IOPS, 12.00 MiB/s [2024-12-09T09:53:09.187Z] 3579.00 IOPS, 13.98 MiB/s [2024-12-09T09:53:10.126Z] 3557.67 IOPS, 13.90 MiB/s [2024-12-09T09:53:11.066Z] 3584.25 IOPS, 14.00 MiB/s [2024-12-09T09:53:12.006Z] 3678.00 IOPS, 14.37 MiB/s [2024-12-09T09:53:12.945Z] 3639.17 IOPS, 14.22 MiB/s [2024-12-09T09:53:14.324Z] 3657.29 IOPS, 14.29 MiB/s [2024-12-09T09:53:14.894Z] 3663.12 IOPS, 14.31 MiB/s [2024-12-09T09:53:16.273Z] 3642.11 IOPS, 14.23 MiB/s [2024-12-09T09:53:16.273Z] 3685.30 IOPS, 14.40 MiB/s 00:42:31.619 Latency(us) 00:42:31.619 [2024-12-09T09:53:16.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:31.619 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:42:31.619 Verification LBA range: start 0x0 length 0x4000 00:42:31.619 NVMe0n1 : 10.23 3703.98 14.47 0.00 0.00 274058.81 54758.97 161558.38 00:42:31.619 [2024-12-09T09:53:16.273Z] =================================================================================================================== 00:42:31.619 [2024-12-09T09:53:16.273Z] Total : 3703.98 14.47 0.00 0.00 274058.81 54758.97 161558.38 00:42:31.619 { 00:42:31.619 "results": [ 00:42:31.619 { 00:42:31.619 "job": "NVMe0n1", 00:42:31.619 "core_mask": "0x1", 00:42:31.619 "workload": "verify", 00:42:31.619 "status": "finished", 00:42:31.619 "verify_range": { 00:42:31.619 "start": 0, 00:42:31.619 "length": 16384 00:42:31.619 }, 00:42:31.619 "queue_depth": 1024, 00:42:31.619 "io_size": 4096, 00:42:31.619 "runtime": 10.226016, 00:42:31.619 "iops": 3703.9840344470417, 00:42:31.619 "mibps": 14.468687634558757, 00:42:31.619 "io_failed": 0, 00:42:31.619 "io_timeout": 0, 00:42:31.619 "avg_latency_us": 274058.8086485789, 00:42:31.619 "min_latency_us": 54758.96888888889, 00:42:31.619 "max_latency_us": 161558.3762962963 00:42:31.619 } 00:42:31.619 
], 00:42:31.619 "core_count": 1 00:42:31.619 } 00:42:31.619 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2281388 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2281388 ']' 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2281388 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281388 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281388' 00:42:31.620 killing process with pid 2281388 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2281388 00:42:31.620 Received shutdown signal, test time was about 10.000000 seconds 00:42:31.620 00:42:31.620 Latency(us) 00:42:31.620 [2024-12-09T09:53:16.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:31.620 [2024-12-09T09:53:16.274Z] =================================================================================================================== 00:42:31.620 [2024-12-09T09:53:16.274Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:31.620 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2281388 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:32.188 rmmod nvme_tcp 00:42:32.188 rmmod nvme_fabrics 00:42:32.188 rmmod nvme_keyring 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:42:32.188 10:53:16 
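That closes out the queue-depth run: target/queue_depth.sh provisions the target over RPC (lines 23-27 in the trace), points a bdevperf instance in the root namespace at it with queue depth 1024, and lets a 10-second verify workload run before killing both processes. The same sequence with rpc.py directly (rpc_cmd in the trace is a thin wrapper around it; binary paths are abbreviated relative to the spdk tree):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM-backed bdev
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf attaches over TCP and runs the verify workload
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The results block is self-consistent under Little's law: 1024 outstanding I/Os at ~3704 IOPS implies roughly 276 ms of queueing per I/O, which matches the ~274 ms average latency reported.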
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2281233 ']' 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2281233 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2281233 ']' 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2281233 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281233 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281233' 00:42:32.188 killing process with pid 2281233 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2281233 00:42:32.188 10:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2281233 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:32.757 10:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:34.664 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:34.664 00:42:34.664 real 0m19.674s 00:42:34.664 user 0m26.206s 00:42:34.664 sys 0m4.950s 00:42:34.665 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.665 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:34.665 ************************************ 00:42:34.665 END TEST nvmf_queue_depth 00:42:34.665 ************************************ 00:42:34.665 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:34.665 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:34.665 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.665 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:34.665 ************************************ 00:42:34.665 START TEST nvmf_target_multipath 00:42:34.665 ************************************ 00:42:34.665 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:34.923 * Looking for test storage... 00:42:34.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:34.923 10:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:34.923 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:34.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.924 --rc genhtml_branch_coverage=1 00:42:34.924 --rc genhtml_function_coverage=1 00:42:34.924 --rc genhtml_legend=1 00:42:34.924 --rc geninfo_all_blocks=1 00:42:34.924 --rc geninfo_unexecuted_blocks=1 00:42:34.924 00:42:34.924 ' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:34.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.924 --rc genhtml_branch_coverage=1 00:42:34.924 --rc genhtml_function_coverage=1 00:42:34.924 --rc genhtml_legend=1 00:42:34.924 --rc geninfo_all_blocks=1 00:42:34.924 --rc geninfo_unexecuted_blocks=1 00:42:34.924 00:42:34.924 ' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:34.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.924 --rc genhtml_branch_coverage=1 00:42:34.924 --rc genhtml_function_coverage=1 00:42:34.924 --rc genhtml_legend=1 00:42:34.924 --rc geninfo_all_blocks=1 00:42:34.924 --rc 
geninfo_unexecuted_blocks=1 00:42:34.924 00:42:34.924 ' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:34.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.924 --rc genhtml_branch_coverage=1 00:42:34.924 --rc genhtml_function_coverage=1 00:42:34.924 --rc genhtml_legend=1 00:42:34.924 --rc geninfo_all_blocks=1 00:42:34.924 --rc geninfo_unexecuted_blocks=1 00:42:34.924 00:42:34.924 ' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
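The lcov probe at the top of this test ends in a pure-bash version comparison: scripts/common.sh splits each version string on '.', '-' and ':', then walks the components left to right, so `lt 1.15 2` is decided at the first pair (1 < 2). A condensed sketch of the traced logic, reduced to the '<' branch actually taken here (missing components are padded with 0; the real script additionally validates each component through its decimal helper):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # already greater: not '<'
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1   # equal is not '<'
}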
00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:34.924 10:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:34.924 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:34.925 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:34.925 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:34.925 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:34.925 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:42:34.925 10:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
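nvmftestinit now repeats, for the multipath test, the same device discovery that opened the queue-depth run: gather_supported_nvmf_pci_devs collects PCI addresses of supported NICs by vendor:device ID (0x8086:0x159b are the E810 ports found on this host) and resolves each address to its netdev through /sys/bus/pci/devices/<addr>/net/. A rough sketch of that pass, with lspci standing in for the script's internal pci_bus_cache lookup:

# find E810 ports (vendor 0x8086, device 0x159b) and map each to its netdev
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${net_dev##*/}"
    done
done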
00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:38.218 10:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:42:38.218 Found 0000:84:00.0 (0x8086 - 0x159b) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:42:38.218 Found 0000:84:00.1 (0x8086 - 0x159b) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:38.218 10:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:42:38.218 Found net devices under 0000:84:00.0: cvl_0_0 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:38.218 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:42:38.219 Found net devices under 0000:84:00.1: cvl_0_1 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:38.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:38.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:42:38.219 00:42:38.219 --- 10.0.0.2 ping statistics --- 00:42:38.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:38.219 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:38.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:38.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:42:38.219 00:42:38.219 --- 10.0.0.1 ping statistics --- 00:42:38.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:38.219 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:38.219 only one NIC for nvmf test 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:38.219 rmmod nvme_tcp 00:42:38.219 rmmod nvme_fabrics 00:42:38.219 rmmod nvme_keyring 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:38.219 10:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:38.219 10:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:40.129 10:53:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:40.129 00:42:40.129 real 0m5.516s 00:42:40.129 user 0m1.188s 00:42:40.129 sys 0m2.341s 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:40.129 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:40.129 ************************************ 00:42:40.129 END TEST nvmf_target_multipath 00:42:40.129 ************************************ 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:40.389 ************************************ 00:42:40.389 START TEST nvmf_zcopy 00:42:40.389 ************************************ 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:40.389 * Looking for test storage... 
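Before the zcopy test proper starts (the storage probe prints its result just below), the harness checks the installed lcov and gates its coverage flags on the tool's version; the xtrace that follows walks scripts/common.sh's cmp_versions field by field. Condensed into a standalone sketch (the real helper also validates each field with the decimal guard visible in the trace, and handles more operators), the comparison traced below is roughly:

    # Hedged sketch of the cmp_versions walk traced below: split both version
    # strings on [.-:], then compare numerically field by field.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields default to 0, so 1.15 vs 2 compares as (1,15) vs (2,0).
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]   # equal on every field
    }
    cmp_versions 1.15 '<' 2 && echo 'pre-2.0 lcov: use the --rc lcov_*_coverage=1 option names'

As the trace shows, lcov 1.15 satisfies the "less than 2" check, so the run exports the older lcov_branch_coverage/lcov_function_coverage option spellings into LCOV_OPTS.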
00:42:40.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:42:40.389 10:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.650 --rc genhtml_branch_coverage=1 00:42:40.650 --rc genhtml_function_coverage=1 00:42:40.650 --rc genhtml_legend=1 00:42:40.650 --rc geninfo_all_blocks=1 00:42:40.650 --rc geninfo_unexecuted_blocks=1 00:42:40.650 00:42:40.650 ' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.650 --rc genhtml_branch_coverage=1 00:42:40.650 --rc genhtml_function_coverage=1 00:42:40.650 --rc genhtml_legend=1 00:42:40.650 --rc geninfo_all_blocks=1 00:42:40.650 --rc geninfo_unexecuted_blocks=1 00:42:40.650 00:42:40.650 ' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.650 --rc genhtml_branch_coverage=1 00:42:40.650 --rc genhtml_function_coverage=1 00:42:40.650 --rc genhtml_legend=1 00:42:40.650 --rc geninfo_all_blocks=1 00:42:40.650 --rc geninfo_unexecuted_blocks=1 00:42:40.650 00:42:40.650 ' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:40.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:40.650 --rc genhtml_branch_coverage=1 00:42:40.650 --rc genhtml_function_coverage=1 00:42:40.650 --rc genhtml_legend=1 00:42:40.650 --rc geninfo_all_blocks=1 00:42:40.650 --rc geninfo_unexecuted_blocks=1 00:42:40.650 00:42:40.650 ' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:40.650 10:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:40.650 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:40.651 10:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:43.950 10:53:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:42:43.950 Found 0000:84:00.0 (0x8086 - 0x159b) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:42:43.950 Found 0000:84:00.1 (0x8086 - 0x159b) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:42:43.950 Found net devices under 0000:84:00.0: cvl_0_0 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:42:43.950 Found net devices under 0000:84:00.1: cvl_0_1 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:43.950 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:43.951 10:53:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:43.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:43.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:42:43.951 00:42:43.951 --- 10.0.0.2 ping statistics --- 00:42:43.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.951 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:43.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:43.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:42:43.951 00:42:43.951 --- 10.0.0.1 ping statistics --- 00:42:43.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.951 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2286881 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2286881 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2286881 ']' 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:43.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:43.951 10:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:43.951 [2024-12-09 10:53:28.542441] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:43.951 [2024-12-09 10:53:28.543797] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:42:43.951 [2024-12-09 10:53:28.543863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:44.211 [2024-12-09 10:53:28.676589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.211 [2024-12-09 10:53:28.782092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:44.211 [2024-12-09 10:53:28.782200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:44.211 [2024-12-09 10:53:28.782238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:44.211 [2024-12-09 10:53:28.782294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:44.211 [2024-12-09 10:53:28.782322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:44.211 [2024-12-09 10:53:28.783513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:44.470 [2024-12-09 10:53:28.958223] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:44.470 [2024-12-09 10:53:28.958922] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
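For readers reconstructing the topology from the trace: everything nvmf_tcp_init logged above amounts to the short sequence below, shown as a condensed sketch assembled from the commands in this log (cvl_0_0/cvl_0_1 are the harness's renamed e810 ports; the addresses, port, and binary path are taken verbatim from the run):

    # The target port is isolated in its own network namespace; the initiator
    # reaches it over 10.0.0.0/24 on the NVMe/TCP port 4420.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can
    # strip it later via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
    # The target then runs inside the namespace, pinned to core 1 (-m 0x2)
    # in interrupt mode, matching the reactor notice above:
    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2

The multipath test earlier in this log ran the identical sequence and tore it down with the tagged-rule restore plus remove_spdk_ns; the zcopy test simply rebuilds it.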
00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.731 [2024-12-09 10:53:29.220632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.731 [2024-12-09 10:53:29.240897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:44.731 10:53:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.731 malloc0 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.731 { 00:42:44.731 "params": { 00:42:44.731 "name": "Nvme$subsystem", 00:42:44.731 "trtype": "$TEST_TRANSPORT", 00:42:44.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.731 "adrfam": "ipv4", 00:42:44.731 "trsvcid": "$NVMF_PORT", 00:42:44.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.731 "hdgst": ${hdgst:-false}, 00:42:44.731 "ddgst": ${ddgst:-false} 00:42:44.731 }, 00:42:44.731 "method": "bdev_nvme_attach_controller" 00:42:44.731 } 00:42:44.731 EOF 00:42:44.731 )") 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:44.731 10:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:44.731 "params": { 00:42:44.731 "name": "Nvme1", 00:42:44.731 "trtype": "tcp", 00:42:44.731 "traddr": "10.0.0.2", 00:42:44.731 "adrfam": "ipv4", 00:42:44.731 "trsvcid": "4420", 00:42:44.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:44.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:44.731 "hdgst": false, 00:42:44.731 "ddgst": false 00:42:44.731 }, 00:42:44.731 "method": "bdev_nvme_attach_controller" 00:42:44.731 }' 00:42:44.992 [2024-12-09 10:53:29.390889] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
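The bdevperf side gets its controller definition the same way throughout this run: gen_nvmf_target_json accumulates one heredoc fragment per subsystem, validates and joins them through jq, and hands the result to bdevperf on an anonymous fd (--json /dev/fd/62 above). A minimal sketch of that pattern, using the exact fragment printed above; the process substitution is an assumption consistent with the /dev/fd argument, and the real helper may wrap the fragment further before jq:

    # Build the bdev_nvme_attach_controller fragment and feed it to bdevperf
    # without a temp file. Values are copied from the printf in this log.
    gen_nvmf_target_json() {
        printf '%s\n' '{
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }'
    }
    # /dev/fd/62 in the logged command line is what a process substitution
    # like this produces:
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The second bdevperf below reuses the same generator for its 5-second randrw pass (-t 5 -q 128 -w randrw -M 50 -o 8192), this time arriving on /dev/fd/63.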
00:42:44.992 [2024-12-09 10:53:29.390983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287020 ] 00:42:44.992 [2024-12-09 10:53:29.555357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:45.251 [2024-12-09 10:53:29.670555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:45.510 Running I/O for 10 seconds... 00:42:47.825 2494.00 IOPS, 19.48 MiB/s [2024-12-09T09:53:33.424Z] 2433.50 IOPS, 19.01 MiB/s [2024-12-09T09:53:34.360Z] 2464.00 IOPS, 19.25 MiB/s [2024-12-09T09:53:35.300Z] 2485.75 IOPS, 19.42 MiB/s [2024-12-09T09:53:36.235Z] 2499.00 IOPS, 19.52 MiB/s [2024-12-09T09:53:37.172Z] 2513.83 IOPS, 19.64 MiB/s [2024-12-09T09:53:38.107Z] 2505.14 IOPS, 19.57 MiB/s [2024-12-09T09:53:39.489Z] 2696.25 IOPS, 21.06 MiB/s [2024-12-09T09:53:40.427Z] 2670.78 IOPS, 20.87 MiB/s [2024-12-09T09:53:40.427Z] 2773.50 IOPS, 21.67 MiB/s 00:42:55.773 Latency(us) 00:42:55.773 [2024-12-09T09:53:40.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:55.773 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:42:55.773 Verification LBA range: start 0x0 length 0x1000 00:42:55.773 Nvme1n1 : 10.02 2778.93 21.71 0.00 0.00 45918.79 2160.26 65244.73 00:42:55.773 [2024-12-09T09:53:40.427Z] =================================================================================================================== 00:42:55.773 [2024-12-09T09:53:40.427Z] Total : 2778.93 21.71 0.00 0.00 45918.79 2160.26 65244.73 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2288201 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:55.773 { 00:42:55.773 "params": { 00:42:55.773 "name": "Nvme$subsystem", 00:42:55.773 "trtype": "$TEST_TRANSPORT", 00:42:55.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:55.773 "adrfam": "ipv4", 00:42:55.773 "trsvcid": "$NVMF_PORT", 00:42:55.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:55.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:55.773 "hdgst": ${hdgst:-false}, 00:42:55.773 "ddgst": ${ddgst:-false} 00:42:55.773 }, 00:42:55.773 "method": "bdev_nvme_attach_controller" 00:42:55.773 } 00:42:55.773 EOF 00:42:55.773 )") 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:55.773 
[2024-12-09 10:53:40.340352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.340396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:55.773 10:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:55.773 "params": { 00:42:55.773 "name": "Nvme1", 00:42:55.773 "trtype": "tcp", 00:42:55.773 "traddr": "10.0.0.2", 00:42:55.773 "adrfam": "ipv4", 00:42:55.773 "trsvcid": "4420", 00:42:55.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:55.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:55.773 "hdgst": false, 00:42:55.773 "ddgst": false 00:42:55.773 }, 00:42:55.773 "method": "bdev_nvme_attach_controller" 00:42:55.773 }' 00:42:55.773 [2024-12-09 10:53:40.348248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.348272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.356247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.356270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.364271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.364295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.372247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.372268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.380253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.380274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.388247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.388270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.396247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.396270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.399033] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:42:55.773 [2024-12-09 10:53:40.399139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288201 ] 00:42:55.773 [2024-12-09 10:53:40.404250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.404274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.412249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.412274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:55.773 [2024-12-09 10:53:40.420252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:55.773 [2024-12-09 10:53:40.420276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.428255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.428295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.436254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.436277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.444249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.444271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.452274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.452298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.460249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.460270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.468249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.468273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.476249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.476271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.482833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:56.033 [2024-12-09 10:53:40.484249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.484272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.492295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.492331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.500304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.500349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:42:56.033 [2024-12-09 10:53:40.508251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.508273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.516249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.516271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.524249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.524271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.033 [2024-12-09 10:53:40.532245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.033 [2024-12-09 10:53:40.532266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.540245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.540282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.545014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:56.034 [2024-12-09 10:53:40.548245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.548265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.556245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.556266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.564302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.564344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.572299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.572342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.580303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.580341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.588309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.588351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.596311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.596358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.604295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.604336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.612269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.612297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 
10:53:40.620281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.620318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.628300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.628341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.636303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.636346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.644276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.644307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.652270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.652291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.660823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.660848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.668252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.668277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.676251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.676275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.034 [2024-12-09 10:53:40.684251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.034 [2024-12-09 10:53:40.684273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.293 [2024-12-09 10:53:40.692247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.293 [2024-12-09 10:53:40.692271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.293 [2024-12-09 10:53:40.700244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.293 [2024-12-09 10:53:40.700265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.293 [2024-12-09 10:53:40.708245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.293 [2024-12-09 10:53:40.708265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.293 [2024-12-09 10:53:40.716244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.293 [2024-12-09 10:53:40.716264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.293 [2024-12-09 10:53:40.724243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.293 [2024-12-09 10:53:40.724263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.293 [2024-12-09 10:53:40.732250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.293 [2024-12-09 10:53:40.732273] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.740249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.740272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.748278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.748301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.756246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.756266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.764252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.764278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.772247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.772271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 Running I/O for 5 seconds... 00:42:56.294 [2024-12-09 10:53:40.788618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.788646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.798921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.798949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.811998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.812050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.821507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.821532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.832976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.833002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.843553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.843579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.858566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.858591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.874362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.874387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.884069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.884095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.895435] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.895460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.909247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.909272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.918741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.918768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.930265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.930291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.294 [2024-12-09 10:53:40.945357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.294 [2024-12-09 10:53:40.945383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:40.955932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:40.955959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:40.977835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:40.977862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:40.996885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:40.996911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.016864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.016891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.036614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.036683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.055879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.055905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.075060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.075129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.093993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.094034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.113479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.113548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.132794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.132822] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.151815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.151842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.170123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.170191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.553 [2024-12-09 10:53:41.190066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.553 [2024-12-09 10:53:41.190138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.208004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.208046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.225509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.225578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.246838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.246864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.264101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.264174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.284605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.284674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.305964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.306032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.329796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.329865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.353906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.353937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.375072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.375140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.396696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.396782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.418861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.418893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.440667] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.440751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:56.812 [2024-12-09 10:53:41.462206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:56.812 [2024-12-09 10:53:41.462237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.071 [2024-12-09 10:53:41.483327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.071 [2024-12-09 10:53:41.483418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.071 [2024-12-09 10:53:41.505483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.071 [2024-12-09 10:53:41.505551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.071 [2024-12-09 10:53:41.527596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.071 [2024-12-09 10:53:41.527665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.071 [2024-12-09 10:53:41.550410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.071 [2024-12-09 10:53:41.550478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.071 [2024-12-09 10:53:41.571875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.071 [2024-12-09 10:53:41.571907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.071 [2024-12-09 10:53:41.592861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.071 [2024-12-09 10:53:41.592903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.071 [2024-12-09 10:53:41.613753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.071 [2024-12-09 10:53:41.613794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.072 [2024-12-09 10:53:41.631002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.072 [2024-12-09 10:53:41.631069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.072 [2024-12-09 10:53:41.653076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.072 [2024-12-09 10:53:41.653144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.072 [2024-12-09 10:53:41.672179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.072 [2024-12-09 10:53:41.672248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.072 [2024-12-09 10:53:41.692752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.072 [2024-12-09 10:53:41.692800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.072 [2024-12-09 10:53:41.714488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.072 [2024-12-09 10:53:41.714556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.736774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.736807] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.757627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.757696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.779792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.779823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 7162.00 IOPS, 55.95 MiB/s [2024-12-09T09:53:41.984Z] [2024-12-09 10:53:41.803010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.803066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.824180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.824247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.845421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.845490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.866602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.866670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.888205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.888272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.909002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.330 [2024-12-09 10:53:41.909052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.330 [2024-12-09 10:53:41.937093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.331 [2024-12-09 10:53:41.937161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.331 [2024-12-09 10:53:41.956843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.331 [2024-12-09 10:53:41.956874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.331 [2024-12-09 10:53:41.978816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.331 [2024-12-09 10:53:41.978847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:41.999644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:41.999747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.020257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.020326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.039879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.039910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 
10:53:42.051910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.051942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.064255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.064289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.076432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.076463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.087527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.087558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.100246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.100278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.112502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.112533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.130241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.130309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.151928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.151959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.173858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.173889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.195203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.195274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.217419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.217487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.590 [2024-12-09 10:53:42.239258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.590 [2024-12-09 10:53:42.239326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.260920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.260952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.282619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.282689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.304737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.304793] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.326098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.326166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.349567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.349656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.372009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.372077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.391840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.391871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.414942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.414973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.435900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.435931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.458198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.458267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.479801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.479836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:57.850 [2024-12-09 10:53:42.501434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:57.850 [2024-12-09 10:53:42.501503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.523191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.523281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.545216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.545285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.566846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.566876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.587702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.587787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.607041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.607109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.628949] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.628981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.650610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.650680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.671544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.671611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.695676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.695774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.716917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.716949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.738030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.738099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.109 [2024-12-09 10:53:42.759011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.109 [2024-12-09 10:53:42.759043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.779324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.779394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 6707.50 IOPS, 52.40 MiB/s [2024-12-09T09:53:43.023Z] [2024-12-09 10:53:42.800455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.800524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.822861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.822893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.843897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.843928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.865847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.865879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.886971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.887002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.908435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.908506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.929968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:58.369 [2024-12-09 10:53:42.930039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.951851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.951882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.973846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.973877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:42.994056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:42.994123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.369 [2024-12-09 10:53:43.015958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.369 [2024-12-09 10:53:43.015989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.036442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.036511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.057897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.057929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.076842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.076873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.099051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.099120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.119907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.119939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.143154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.143222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.165569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.165636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.187717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.187792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.209125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.209193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.230235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.230304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.251812] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.251843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.628 [2024-12-09 10:53:43.272676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.628 [2024-12-09 10:53:43.272773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.291675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.291774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.312783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.312814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.334290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.334360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.356126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.356200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.378095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.378163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.400858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.400890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.421930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.421961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.443603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.443670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.463853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.463884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.486888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.486918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.508824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.508858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:58.887 [2024-12-09 10:53:43.529695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:58.887 [2024-12-09 10:53:43.529781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.550778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.550817] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.571814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.571844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.592236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.592303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.613982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.614047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.635846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.635913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.658910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.658944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.677837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.677868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.699498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.699565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.721054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.721120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.743967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.744050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.764518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.764585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.146 [2024-12-09 10:53:43.786772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.146 [2024-12-09 10:53:43.786802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 6457.33 IOPS, 50.45 MiB/s [2024-12-09T09:53:44.060Z] [2024-12-09 10:53:43.808344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.808411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:43.831837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.831904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:43.853889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.853918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 
10:53:43.875873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.875903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:43.897789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.897819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:43.919880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.919911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:43.941355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.941424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:43.962574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.962659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:43.985487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:43.985555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:44.008448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:44.008517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:44.029812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:44.029843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.406 [2024-12-09 10:53:44.051625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.406 [2024-12-09 10:53:44.051692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.074235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.074305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.097185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.097253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.118846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.118876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.140139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.140212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.161635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.161702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.182692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.182782] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.205334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.205402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.227608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.227675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.248845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.248876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.270675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.270774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.292154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.292229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.666 [2024-12-09 10:53:44.312797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.666 [2024-12-09 10:53:44.312828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.934 [2024-12-09 10:53:44.331884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.934 [2024-12-09 10:53:44.331915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.934 [2024-12-09 10:53:44.351888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.934 [2024-12-09 10:53:44.351918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.934 [2024-12-09 10:53:44.373538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.934 [2024-12-09 10:53:44.373623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.934 [2024-12-09 10:53:44.394686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.934 [2024-12-09 10:53:44.394779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.934 [2024-12-09 10:53:44.416459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.934 [2024-12-09 10:53:44.416526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.437957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.437988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.458944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.458974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.480038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.480107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.500967] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.500997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.522466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.522533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.545022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.545091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.566786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.566816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:59.935 [2024-12-09 10:53:44.588434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:59.935 [2024-12-09 10:53:44.588502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.609503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.609570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.630859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.630889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.646825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.646855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.668093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.668162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.686903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.686934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.710122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.710189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.731465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.731534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.752583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.752651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 [2024-12-09 10:53:44.775271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:00.198 [2024-12-09 10:53:44.775338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:00.198 6317.50 IOPS, 49.36 MiB/s [2024-12-09T09:53:44.852Z] [2024-12-09 10:53:44.798297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:43:00.198 [2024-12-09 10:53:44.798364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.198 [2024-12-09 10:53:44.820127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.198 [2024-12-09 10:53:44.820202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.198 [2024-12-09 10:53:44.842000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.198 [2024-12-09 10:53:44.842067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:44.864314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:44.864381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:44.885389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:44.885456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:44.907845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:44.907876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:44.930059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:44.930127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:44.950814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:44.950844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:44.970422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:44.970487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:44.992046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:44.992112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:45.012889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:45.012918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:45.034185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:45.034251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:45.055812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:45.055841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:45.079333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:45.079400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.457 [2024-12-09 10:53:45.101202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.457 [2024-12-09 10:53:45.101271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.127785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.127816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.146843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.146873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.170378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.170446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.192100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.192174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.213704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.213796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.234872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.234902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.254830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.254861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.276310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.276377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.297398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.297465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.320227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.320293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.340469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.340537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.715 [2024-12-09 10:53:45.362305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.715 [2024-12-09 10:53:45.362374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.383643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.383710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.404521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.404589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.427356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.427423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.449815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.449845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.472819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.472849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.494840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.494870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.517367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.517435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.973 [2024-12-09 10:53:45.540340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.973 [2024-12-09 10:53:45.540407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.974 [2024-12-09 10:53:45.561937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.974 [2024-12-09 10:53:45.561967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.974 [2024-12-09 10:53:45.585407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.974 [2024-12-09 10:53:45.585473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.974 [2024-12-09 10:53:45.606825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.974 [2024-12-09 10:53:45.606855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:00.974 [2024-12-09 10:53:45.627796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:00.974 [2024-12-09 10:53:45.627827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.648941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.648971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.670358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.670425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.692425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.692458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.713039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.713106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.735204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.735271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.759173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.759242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.781795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.781825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 6216.40 IOPS, 48.57 MiB/s [2024-12-09T09:53:45.888Z] [2024-12-09 10:53:45.803628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.803695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234
00:43:01.234 Latency(us)
00:43:01.234 [2024-12-09T09:53:45.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:01.234 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:43:01.234 Nvme1n1 : 5.02 6221.99 48.61 0.00 0.00 20530.62 3021.94 37282.70
00:43:01.234 [2024-12-09T09:53:45.888Z] ===================================================================================================================
00:43:01.234 [2024-12-09T09:53:45.888Z] Total : 6221.99 48.61 0.00 0.00 20530.62 3021.94 37282.70
00:43:01.234 [2024-12-09 10:53:45.812343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.812404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.824400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.824463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.832389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.832445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.840415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.840479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.848442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.848517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.856442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.856546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.864437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.864512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.872429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.872501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.234 [2024-12-09 10:53:45.880432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.234 [2024-12-09 10:53:45.880504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.497 [2024-12-09 10:53:45.888458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
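Two sanity checks are worth noting on the summary table above: at queue depth 128, Little's law predicts a mean latency of 128 / 6221.99 IOPS, about 20.6 ms, which matches the reported 20530.62 us average, and 6221.99 IOPS of 8192-byte I/Os works out to the reported 48.61 MiB/s. A quick sketch of that arithmetic in shell (the bc invocations are illustrative only, not part of the harness):

    echo '128 / 6221.99' | bc -l             # queue depth / IOPS ~= 0.02057 s, i.e. ~20.6 ms mean latency
    echo '6221.99 * 8192 / 1048576' | bc -l  # IOPS * IO size, in MiB/s ~= 48.61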
00:43:01.497 [2024-12-09 10:53:45.888545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.497 [2024-12-09 10:53:45.896448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.497 [2024-12-09 10:53:45.896526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.497 [2024-12-09 10:53:45.904439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.497 [2024-12-09 10:53:45.904516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.497 [2024-12-09 10:53:45.912427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.497 [2024-12-09 10:53:45.912499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.497 [2024-12-09 10:53:45.920428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.497 [2024-12-09 10:53:45.920501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.497 [2024-12-09 10:53:45.928433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.928508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.936435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.936510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.944438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.944515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.952432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.952507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.960439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.960515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.968446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.968519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.976422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.976493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.984427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.984500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:45.992427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:45.992497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.000430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.000503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.008372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.008449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.016379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.016431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.024261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.024285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.032261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.032285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.040267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.040340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.048279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.048335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.056263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.056286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.064433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.064505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.072424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.072497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.080428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.080502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.088266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.088290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.096264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.096287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.104262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.104285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.112373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.112424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.120279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.120331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 [2024-12-09 10:53:46.128376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:01.498 [2024-12-09 10:53:46.128428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:01.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2288201) - No such process
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2288201
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:01.498 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:01.907 delay0
00:43:01.907 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:01.907 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:43:01.907 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:01.907 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:01.908 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:01.908 10:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:43:01.908 [2024-12-09 10:53:46.282462] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:43:10.032 Initializing NVMe Controllers
00:43:10.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:43:10.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:43:10.032 Initialization complete. Launching workers.
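The rpc_cmd calls in the trace above go through the test harness wrapper around SPDK's JSON-RPC client; a minimal standalone sketch of the same namespace swap, assuming a target listening on the default /var/tmp/spdk.sock socket (the script path and socket are assumptions here, not shown in this log):

    # Sketch: replace NSID 1's backing bdev with a delay bdev, mirroring the RPCs traced above.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write latency, in microseconds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

With delay0 in place, every I/O sits in the bdev layer long enough for the abort example launched above to cancel it.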
00:43:10.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 228, failed: 11877
00:43:10.033 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 11971, failed to submit 134
00:43:10.033 success 11888, unsuccessful 83, failed 0
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:43:10.033 rmmod nvme_tcp
00:43:10.033 rmmod nvme_fabrics
00:43:10.033 rmmod nvme_keyring
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2286881 ']'
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2286881
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2286881 ']'
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2286881
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286881
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286881'
00:43:10.033 killing process with pid 2286881
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2286881
00:43:10.033 10:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2286881
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
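The abort counters above reconcile exactly, which is implicitly what the test asserts: 228 completed plus 11877 failed (the aborted ones) gives 12105 I/Os issued, 11971 submitted aborts plus 134 that could not be submitted also gives 12105, and 11888 successful plus 83 unsuccessful accounts for all 11971 submitted aborts. A quick check (illustrative shell only, not part of the harness):

    echo $((228 + 11877))   # 12105 I/Os issued in total
    echo $((11971 + 134))   # 12105 abort attempts (submitted + failed to submit)
    echo $((11888 + 83))    # 11971 submitted aborts resolved (success + unsuccessful)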
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:43:10.033 10:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:43:11.951
00:43:11.951 real 0m31.327s
00:43:11.951 user 0m42.096s
00:43:11.951 sys 0m12.124s
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:11.951 ************************************
00:43:11.951 END TEST nvmf_zcopy
00:43:11.951 ************************************
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:43:11.951 ************************************
00:43:11.951 START TEST nvmf_nmic
00:43:11.951 ************************************
00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:43:11.951 * Looking for test storage...
00:43:11.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:11.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.951 --rc genhtml_branch_coverage=1 00:43:11.951 --rc genhtml_function_coverage=1 00:43:11.951 --rc genhtml_legend=1 00:43:11.951 --rc geninfo_all_blocks=1 00:43:11.951 --rc geninfo_unexecuted_blocks=1 00:43:11.951 00:43:11.951 ' 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:11.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.951 --rc genhtml_branch_coverage=1 00:43:11.951 --rc genhtml_function_coverage=1 00:43:11.951 --rc genhtml_legend=1 00:43:11.951 --rc geninfo_all_blocks=1 00:43:11.951 --rc geninfo_unexecuted_blocks=1 00:43:11.951 00:43:11.951 ' 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:11.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.951 --rc genhtml_branch_coverage=1 00:43:11.951 --rc genhtml_function_coverage=1 00:43:11.951 --rc genhtml_legend=1 00:43:11.951 --rc geninfo_all_blocks=1 00:43:11.951 --rc geninfo_unexecuted_blocks=1 00:43:11.951 00:43:11.951 ' 00:43:11.951 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:11.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:11.951 --rc genhtml_branch_coverage=1 00:43:11.951 --rc genhtml_function_coverage=1 00:43:11.951 --rc genhtml_legend=1 00:43:11.951 --rc geninfo_all_blocks=1 00:43:11.951 --rc geninfo_unexecuted_blocks=1 00:43:11.951 00:43:11.951 ' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:11.952 10:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:43:11.952 10:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:15.251 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:15.251 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:43:15.251 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:15.251 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:15.251 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:15.252 10:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:15.252 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:15.252 10:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:15.252 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:15.252 Found net devices under 0000:84:00.0: cvl_0_0 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:15.252 
10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:15.252 Found net devices under 0000:84:00.1: cvl_0_1 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
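The commands traced above and just below assemble the usual two-namespace loopback rig for phy testing: one E810 port (cvl_0_0) is moved into a private network namespace to host the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed into plain ip(8) calls, as the trace runs them:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up                                              # links are brought up just below
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up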
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:43:15.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:43:15.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms
00:43:15.252
00:43:15.252 --- 10.0.0.2 ping statistics ---
00:43:15.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:15.252 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:43:15.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:43:15.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms
00:43:15.252
00:43:15.252 --- 10.0.0.1 ping statistics ---
00:43:15.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:43:15.252 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:43:15.252 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2291833
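Note the comment tag on the iptables rule above: every rule the harness adds is stamped with SPDK_NVMF, which is what lets the iptr cleanup step seen at the end of the zcopy run drop exactly those rules and nothing else. The add/cleanup pairing, as it appears in the trace:

    # Open the NVMe/TCP listener port, tagging the rule so it can be identified later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Cleanup: rewrite the ruleset without any SPDK_NVMF-tagged lines.
    iptables-save | grep -v SPDK_NVMF | iptables-restore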
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2291833 00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2291833 ']' 00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:15.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:15.253 10:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:15.253 [2024-12-09 10:53:59.839940] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:15.253 [2024-12-09 10:53:59.841265] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:43:15.253 [2024-12-09 10:53:59.841337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:15.512 [2024-12-09 10:53:59.978032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:15.512 [2024-12-09 10:54:00.099972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:15.512 [2024-12-09 10:54:00.100086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:15.512 [2024-12-09 10:54:00.100125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:15.512 [2024-12-09 10:54:00.100156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:15.512 [2024-12-09 10:54:00.100194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:15.512 [2024-12-09 10:54:00.103783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:15.512 [2024-12-09 10:54:00.103844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:15.512 [2024-12-09 10:54:00.103954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:15.512 [2024-12-09 10:54:00.103958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.772 [2024-12-09 10:54:00.274809] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:15.772 [2024-12-09 10:54:00.275027] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:15.772 [2024-12-09 10:54:00.275448] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:43:15.772 [2024-12-09 10:54:00.276056] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:15.772 [2024-12-09 10:54:00.276610] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:15.772 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:15.772 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:43:15.772 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:15.772 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:15.772 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:15.773 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:15.773 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:15.773 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.773 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:15.773 [2024-12-09 10:54:00.401300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.033 Malloc0 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.033 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:16.034 
10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.034 [2024-12-09 10:54:00.501348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:43:16.034 test case1: single bdev can't be used in multiple subsystems 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.034 [2024-12-09 10:54:00.525038] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:43:16.034 [2024-12-09 10:54:00.525072] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:43:16.034 [2024-12-09 10:54:00.525090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:16.034 request: 00:43:16.034 { 00:43:16.034 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:43:16.034 "namespace": { 00:43:16.034 "bdev_name": "Malloc0", 00:43:16.034 "no_auto_visible": false, 00:43:16.034 "hide_metadata": false 00:43:16.034 }, 00:43:16.034 "method": "nvmf_subsystem_add_ns", 00:43:16.034 "req_id": 1 00:43:16.034 } 00:43:16.034 Got JSON-RPC error response 00:43:16.034 response: 00:43:16.034 { 00:43:16.034 "code": -32602, 00:43:16.034 "message": "Invalid parameters" 00:43:16.034 } 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:43:16.034 10:54:00 
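test case1 above exercises the bdev claim model: the first subsystem opened Malloc0 with an exclusive_write claim, so a second nvmf_subsystem_add_ns on the same bdev is rejected with -32602, which is exactly the result the test expects. The raw RPC equivalent of what rpc_cmd issued (rpc.py path abbreviated; running it against the default /var/tmp/spdk.sock is an assumption):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed by cnode1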
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:43:16.034 Adding namespace failed - expected result. 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:43:16.034 test case2: host connect to nvmf target in multiple paths 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:16.034 [2024-12-09 10:54:00.533145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.034 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:16.294 10:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:43:16.554 10:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:43:16.554 10:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:43:16.554 10:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:16.554 10:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:16.554 10:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:43:18.461 10:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:18.461 10:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:18.461 10:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:18.461 10:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:18.461 10:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:18.461 10:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:43:18.461 10:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:18.461 [global] 00:43:18.461 thread=1 00:43:18.461 invalidate=1 
00:43:18.461 rw=write 00:43:18.461 time_based=1 00:43:18.461 runtime=1 00:43:18.461 ioengine=libaio 00:43:18.461 direct=1 00:43:18.461 bs=4096 00:43:18.461 iodepth=1 00:43:18.461 norandommap=0 00:43:18.461 numjobs=1 00:43:18.461 00:43:18.461 verify_dump=1 00:43:18.461 verify_backlog=512 00:43:18.461 verify_state_save=0 00:43:18.461 do_verify=1 00:43:18.461 verify=crc32c-intel 00:43:18.461 [job0] 00:43:18.461 filename=/dev/nvme0n1 00:43:18.461 Could not set queue depth (nvme0n1) 00:43:18.721 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:18.721 fio-3.35 00:43:18.721 Starting 1 thread 00:43:20.098 00:43:20.098 job0: (groupid=0, jobs=1): err= 0: pid=2292453: Mon Dec 9 10:54:04 2024 00:43:20.098 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:43:20.098 slat (nsec): min=9231, max=28333, avg=18441.77, stdev=4191.60 00:43:20.098 clat (usec): min=40862, max=41220, avg=40988.45, stdev=67.51 00:43:20.098 lat (usec): min=40889, max=41229, avg=41006.89, stdev=64.55 00:43:20.098 clat percentiles (usec): 00:43:20.098 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:20.098 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:20.098 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:20.098 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:20.098 | 99.99th=[41157] 00:43:20.098 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:43:20.098 slat (usec): min=9, max=30188, avg=70.32, stdev=1333.66 00:43:20.098 clat (usec): min=148, max=275, avg=163.99, stdev=15.11 00:43:20.098 lat (usec): min=158, max=30423, avg=234.30, stdev=1336.87 00:43:20.098 clat percentiles (usec): 00:43:20.098 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 155], 00:43:20.098 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:43:20.098 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 188], 00:43:20.098 | 99.00th=[ 235], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 277], 00:43:20.098 | 99.99th=[ 277] 00:43:20.098 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:43:20.098 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:20.098 lat (usec) : 250=95.32%, 500=0.56% 00:43:20.098 lat (msec) : 50=4.12% 00:43:20.098 cpu : usr=0.20%, sys=0.98%, ctx=536, majf=0, minf=1 00:43:20.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:20.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.098 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:20.098 00:43:20.098 Run status group 0 (all jobs): 00:43:20.099 READ: bw=85.9KiB/s (88.0kB/s), 85.9KiB/s-85.9KiB/s (88.0kB/s-88.0kB/s), io=88.0KiB (90.1kB), run=1024-1024msec 00:43:20.099 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:43:20.099 00:43:20.099 Disk stats (read/write): 00:43:20.099 nvme0n1: ios=44/512, merge=0/0, ticks=1724/81, in_queue=1805, util=98.70% 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:20.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:20.099 10:54:04 
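The fio-wrapper call above just renders and runs the job file printed before the run. A plain-fio equivalent of job0 (fio on PATH is an assumption; each flag mirrors a line of the printed job file):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --invalidate=1 --bs=4096 --iodepth=1 \
      --rw=write --time_based --runtime=1 --numjobs=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0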
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:20.099 rmmod nvme_tcp 00:43:20.099 rmmod nvme_fabrics 00:43:20.099 rmmod nvme_keyring 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2291833 ']' 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2291833 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2291833 ']' 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2291833 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:20.099 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2291833 00:43:20.357 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:20.357 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:20.357 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2291833' 00:43:20.357 killing process with pid 2291833 00:43:20.357 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2291833 00:43:20.357 10:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2291833 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:20.616 10:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:23.154 00:43:23.154 real 0m11.009s 00:43:23.154 user 0m18.872s 00:43:23.154 sys 0m4.387s 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:23.154 ************************************ 00:43:23.154 END TEST nvmf_nmic 00:43:23.154 ************************************ 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:23.154 ************************************ 00:43:23.154 START TEST nvmf_fio_target 00:43:23.154 ************************************ 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:23.154 * Looking for test storage... 
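nvmftestfini's teardown, traced above, undoes everything nvmf_tcp_init set up. In script form (the commands are taken from the trace, except that ip netns delete as the body of the hidden _remove_spdk_ns call is an assumption):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 2291833                                           # killprocess: the nvmf_tgt pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only rules tagged by ipts
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

Tagging every inserted rule with an SPDK_NVMF comment is what lets the restore step strip the test rules without touching preexisting firewall state.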
00:43:23.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:23.154 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:23.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.155 --rc genhtml_branch_coverage=1 00:43:23.155 --rc genhtml_function_coverage=1 00:43:23.155 --rc genhtml_legend=1 00:43:23.155 --rc geninfo_all_blocks=1 00:43:23.155 --rc geninfo_unexecuted_blocks=1 00:43:23.155 00:43:23.155 ' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:23.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.155 --rc genhtml_branch_coverage=1 00:43:23.155 --rc genhtml_function_coverage=1 00:43:23.155 --rc genhtml_legend=1 00:43:23.155 --rc geninfo_all_blocks=1 00:43:23.155 --rc geninfo_unexecuted_blocks=1 00:43:23.155 00:43:23.155 ' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:23.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.155 --rc genhtml_branch_coverage=1 00:43:23.155 --rc genhtml_function_coverage=1 00:43:23.155 --rc genhtml_legend=1 00:43:23.155 --rc geninfo_all_blocks=1 00:43:23.155 --rc geninfo_unexecuted_blocks=1 00:43:23.155 00:43:23.155 ' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:23.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:23.155 --rc genhtml_branch_coverage=1 00:43:23.155 --rc genhtml_function_coverage=1 00:43:23.155 --rc genhtml_legend=1 00:43:23.155 --rc geninfo_all_blocks=1 00:43:23.155 --rc geninfo_unexecuted_blocks=1 00:43:23.155 
00:43:23.155 ' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:23.155 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:43:23.156 10:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:26.461 10:54:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:26.461 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:26.462 10:54:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:26.462 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:26.462 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:26.462 Found net 
devices under 0000:84:00.0: cvl_0_0 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:26.462 Found net devices under 0000:84:00.1: cvl_0_1 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:26.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:26.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:43:26.462 00:43:26.462 --- 10.0.0.2 ping statistics --- 00:43:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.462 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:43:26.462 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:26.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:26.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:43:26.462 00:43:26.462 --- 10.0.0.1 ping statistics --- 00:43:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:26.463 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2295168 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2295168 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2295168 ']' 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:26.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
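waitforlisten then blocks until the freshly started target answers on /var/tmp/spdk.sock (max_retries=100 per the trace). A rough standalone approximation (polling with rpc_get_methods is an assumption about the probe; the harness's actual check may differ):

  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done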
00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:26.463 10:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.463 [2024-12-09 10:54:10.857634] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:26.463 [2024-12-09 10:54:10.859215] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:43:26.463 [2024-12-09 10:54:10.859291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:26.463 [2024-12-09 10:54:11.008012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:26.724 [2024-12-09 10:54:11.127784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:26.724 [2024-12-09 10:54:11.127846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:26.724 [2024-12-09 10:54:11.127864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:26.724 [2024-12-09 10:54:11.127878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:26.724 [2024-12-09 10:54:11.127891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:26.724 [2024-12-09 10:54:11.129794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:26.725 [2024-12-09 10:54:11.129860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:26.725 [2024-12-09 10:54:11.129864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:26.725 [2024-12-09 10:54:11.129831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:26.725 [2024-12-09 10:54:11.302252] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:26.725 [2024-12-09 10:54:11.302836] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:26.725 [2024-12-09 10:54:11.303062] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:26.725 [2024-12-09 10:54:11.303999] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:26.725 [2024-12-09 10:54:11.304478] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
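With all four reactors up and every poller thread switched to interrupt mode, fio.sh provisions the target over RPC: a TCP transport, seven 64 MiB malloc bdevs with 512-byte blocks, a raid0 stripe and a concat array built from five of them, and one subsystem exposing four namespaces behind a TCP listener. A condensed replay of the RPC calls that follow in the log — "rpc.py" abbreviates the full scripts/rpc.py path, transport flags are copied verbatim from the log, and the listener/namespace ordering is regrouped for readability:

```bash
rpc.py nvmf_create_transport -t tcp -o -u 8192            # options as logged by nvmf/common.sh

for i in $(seq 0 6); do
    rpc.py bdev_malloc_create 64 512                      # Malloc0..Malloc6: 64 MiB, 512 B blocks
done

rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'            # striped
rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'    # concatenated

# Subsystem: -a allows any host, -s sets the serial that waitforserial greps for.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The initiator then connects, yielding /dev/nvme0n1..n4 for the fio jobs:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid=cd6acfbe-4794-e311-a299-001e67a97b02
```

waitforserial then polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until all four namespaces appear, which is why the log below reports nvme_devices=4 before the first fio run starts.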
00:43:26.985 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:26.985 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:43:26.985 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:26.985 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:26.985 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.985 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:26.985 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:27.245 [2024-12-09 10:54:11.850903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:27.246 10:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:28.188 10:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:28.188 10:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:28.450 10:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:28.450 10:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:29.020 10:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:29.020 10:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:29.591 10:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:29.591 10:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:30.162 10:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:31.099 10:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:31.099 10:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:31.358 10:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:31.358 10:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:31.928 10:54:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:31.928 10:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:32.498 10:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:33.440 10:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:33.440 10:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:34.009 10:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:34.009 10:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:34.580 10:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:35.150 [2024-12-09 10:54:19.627071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:35.150 10:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:35.407 10:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:35.976 10:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:36.236 10:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:36.236 10:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:36.236 10:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:36.236 10:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:36.236 10:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:36.236 10:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:38.146 10:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:38.146 10:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:43:38.146 10:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:38.146 10:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:38.146 10:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:38.146 10:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:43:38.146 10:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:38.146 [global] 00:43:38.146 thread=1 00:43:38.146 invalidate=1 00:43:38.146 rw=write 00:43:38.146 time_based=1 00:43:38.146 runtime=1 00:43:38.146 ioengine=libaio 00:43:38.146 direct=1 00:43:38.146 bs=4096 00:43:38.146 iodepth=1 00:43:38.146 norandommap=0 00:43:38.146 numjobs=1 00:43:38.146 00:43:38.146 verify_dump=1 00:43:38.146 verify_backlog=512 00:43:38.146 verify_state_save=0 00:43:38.146 do_verify=1 00:43:38.146 verify=crc32c-intel 00:43:38.146 [job0] 00:43:38.146 filename=/dev/nvme0n1 00:43:38.146 [job1] 00:43:38.146 filename=/dev/nvme0n2 00:43:38.146 [job2] 00:43:38.146 filename=/dev/nvme0n3 00:43:38.146 [job3] 00:43:38.146 filename=/dev/nvme0n4 00:43:38.146 Could not set queue depth (nvme0n1) 00:43:38.146 Could not set queue depth (nvme0n2) 00:43:38.146 Could not set queue depth (nvme0n3) 00:43:38.146 Could not set queue depth (nvme0n4) 00:43:38.405 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:38.405 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:38.405 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:38.405 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:38.405 fio-3.35 00:43:38.405 Starting 4 threads 00:43:39.790 00:43:39.790 job0: (groupid=0, jobs=1): err= 0: pid=2296629: Mon Dec 9 10:54:24 2024 00:43:39.790 read: IOPS=982, BW=3931KiB/s (4025kB/s)(4076KiB/1037msec) 00:43:39.790 slat (nsec): min=6172, max=64785, avg=14810.28, stdev=9100.03 00:43:39.790 clat (usec): min=205, max=41274, avg=747.81, stdev=4390.60 00:43:39.790 lat (usec): min=213, max=41286, avg=762.62, stdev=4391.32 00:43:39.790 clat percentiles (usec): 00:43:39.790 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:43:39.790 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 269], 00:43:39.790 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 343], 95.00th=[ 392], 00:43:39.790 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:39.790 | 99.99th=[41157] 00:43:39.790 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:43:39.790 slat (usec): min=8, max=40732, avg=62.64, stdev=1318.35 00:43:39.790 clat (usec): min=142, max=3275, avg=179.28, stdev=109.79 00:43:39.790 lat (usec): min=152, max=41021, avg=241.92, stdev=1327.01 00:43:39.790 clat percentiles (usec): 00:43:39.790 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:43:39.790 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:43:39.790 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 210], 95.00th=[ 219], 00:43:39.790 | 99.00th=[ 247], 
99.50th=[ 289], 99.90th=[ 1532], 99.95th=[ 3261], 00:43:39.790 | 99.99th=[ 3261] 00:43:39.790 bw ( KiB/s): min= 8192, max= 8192, per=82.96%, avg=8192.00, stdev= 0.00, samples=1 00:43:39.790 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:39.790 lat (usec) : 250=72.44%, 500=26.82%, 1000=0.05% 00:43:39.790 lat (msec) : 2=0.05%, 4=0.05%, 50=0.59% 00:43:39.790 cpu : usr=1.54%, sys=2.99%, ctx=2049, majf=0, minf=1 00:43:39.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.790 issued rwts: total=1019,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:39.790 job1: (groupid=0, jobs=1): err= 0: pid=2296630: Mon Dec 9 10:54:24 2024 00:43:39.790 read: IOPS=25, BW=103KiB/s (106kB/s)(104KiB/1005msec) 00:43:39.790 slat (nsec): min=7537, max=41327, avg=23977.42, stdev=9918.46 00:43:39.790 clat (usec): min=225, max=42082, avg=33182.56, stdev=16372.82 00:43:39.790 lat (usec): min=243, max=42096, avg=33206.54, stdev=16377.57 00:43:39.790 clat percentiles (usec): 00:43:39.790 | 1.00th=[ 227], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[40633], 00:43:39.790 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:39.790 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:39.790 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:39.790 | 99.99th=[42206] 00:43:39.790 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:43:39.790 slat (usec): min=10, max=11063, avg=35.77, stdev=488.35 00:43:39.790 clat (usec): min=183, max=332, avg=230.12, stdev=19.42 00:43:39.790 lat (usec): min=199, max=11396, avg=265.90, stdev=493.22 00:43:39.790 clat percentiles (usec): 00:43:39.790 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:43:39.790 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 237], 00:43:39.790 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 255], 00:43:39.790 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 334], 99.95th=[ 334], 00:43:39.790 | 99.99th=[ 334] 00:43:39.790 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:43:39.790 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:39.790 lat (usec) : 250=89.03%, 500=7.06% 00:43:39.790 lat (msec) : 50=3.90% 00:43:39.790 cpu : usr=0.30%, sys=1.10%, ctx=540, majf=0, minf=1 00:43:39.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.790 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:39.790 job2: (groupid=0, jobs=1): err= 0: pid=2296631: Mon Dec 9 10:54:24 2024 00:43:39.790 read: IOPS=29, BW=118KiB/s (121kB/s)(120KiB/1016msec) 00:43:39.790 slat (nsec): min=10526, max=48124, avg=19341.90, stdev=8479.23 00:43:39.790 clat (usec): min=403, max=41046, avg=30122.47, stdev=18176.88 00:43:39.790 lat (usec): min=437, max=41065, avg=30141.81, stdev=18171.91 00:43:39.790 clat percentiles (usec): 00:43:39.790 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 461], 20.00th=[ 515], 00:43:39.790 | 30.00th=[40633], 40.00th=[40633], 
50.00th=[41157], 60.00th=[41157], 00:43:39.790 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:39.790 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:39.790 | 99.99th=[41157] 00:43:39.790 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:43:39.790 slat (nsec): min=9022, max=47337, avg=11843.27, stdev=4133.14 00:43:39.790 clat (usec): min=167, max=337, avg=202.65, stdev=21.52 00:43:39.791 lat (usec): min=177, max=350, avg=214.50, stdev=22.79 00:43:39.791 clat percentiles (usec): 00:43:39.791 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:43:39.791 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:43:39.791 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 237], 00:43:39.791 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 338], 99.95th=[ 338], 00:43:39.791 | 99.99th=[ 338] 00:43:39.791 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:43:39.791 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:39.791 lat (usec) : 250=92.44%, 500=2.77%, 750=0.74% 00:43:39.791 lat (msec) : 50=4.06% 00:43:39.791 cpu : usr=0.59%, sys=0.59%, ctx=542, majf=0, minf=2 00:43:39.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.791 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:39.791 job3: (groupid=0, jobs=1): err= 0: pid=2296632: Mon Dec 9 10:54:24 2024 00:43:39.791 read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec) 00:43:39.791 slat (nsec): min=8894, max=38541, avg=20952.43, stdev=6768.62 00:43:39.791 clat (usec): min=40735, max=41991, avg=41018.12, stdev=237.45 00:43:39.791 lat (usec): min=40744, max=42022, avg=41039.07, stdev=240.83 00:43:39.791 clat percentiles (usec): 00:43:39.791 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:39.791 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:39.791 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:39.791 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:39.791 | 99.99th=[42206] 00:43:39.791 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:43:39.791 slat (usec): min=8, max=40739, avg=115.35, stdev=1876.72 00:43:39.791 clat (usec): min=156, max=422, avg=214.98, stdev=35.61 00:43:39.791 lat (usec): min=166, max=40936, avg=330.32, stdev=1876.44 00:43:39.791 clat percentiles (usec): 00:43:39.791 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 178], 00:43:39.791 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:43:39.791 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 281], 00:43:39.791 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 424], 99.95th=[ 424], 00:43:39.791 | 99.99th=[ 424] 00:43:39.791 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:43:39.791 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:39.791 lat (usec) : 250=82.18%, 500=13.88% 00:43:39.791 lat (msec) : 50=3.94% 00:43:39.791 cpu : usr=0.10%, sys=0.77%, ctx=536, majf=0, minf=1 00:43:39.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.791 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.791 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:39.791 00:43:39.791 Run status group 0 (all jobs): 00:43:39.791 READ: bw=4228KiB/s (4329kB/s), 81.1KiB/s-3931KiB/s (83.0kB/s-4025kB/s), io=4384KiB (4489kB), run=1005-1037msec 00:43:39.791 WRITE: bw=9875KiB/s (10.1MB/s), 1977KiB/s-3950KiB/s (2024kB/s-4045kB/s), io=10.0MiB (10.5MB), run=1005-1037msec 00:43:39.791 00:43:39.791 Disk stats (read/write): 00:43:39.791 nvme0n1: ios=1039/1024, merge=0/0, ticks=1455/178, in_queue=1633, util=91.68% 00:43:39.791 nvme0n2: ios=72/512, merge=0/0, ticks=953/115, in_queue=1068, util=93.07% 00:43:39.791 nvme0n3: ios=79/512, merge=0/0, ticks=732/103, in_queue=835, util=90.23% 00:43:39.791 nvme0n4: ios=40/512, merge=0/0, ticks=1602/110, in_queue=1712, util=100.00% 00:43:39.791 10:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:39.791 [global] 00:43:39.791 thread=1 00:43:39.791 invalidate=1 00:43:39.791 rw=randwrite 00:43:39.791 time_based=1 00:43:39.791 runtime=1 00:43:39.791 ioengine=libaio 00:43:39.791 direct=1 00:43:39.791 bs=4096 00:43:39.791 iodepth=1 00:43:39.791 norandommap=0 00:43:39.791 numjobs=1 00:43:39.791 00:43:39.791 verify_dump=1 00:43:39.791 verify_backlog=512 00:43:39.791 verify_state_save=0 00:43:39.791 do_verify=1 00:43:39.791 verify=crc32c-intel 00:43:39.791 [job0] 00:43:39.791 filename=/dev/nvme0n1 00:43:39.791 [job1] 00:43:39.791 filename=/dev/nvme0n2 00:43:39.791 [job2] 00:43:39.791 filename=/dev/nvme0n3 00:43:39.791 [job3] 00:43:39.791 filename=/dev/nvme0n4 00:43:39.791 Could not set queue depth (nvme0n1) 00:43:39.791 Could not set queue depth (nvme0n2) 00:43:39.791 Could not set queue depth (nvme0n3) 00:43:39.791 Could not set queue depth (nvme0n4) 00:43:40.049 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:40.049 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:40.049 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:40.049 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:40.049 fio-3.35 00:43:40.049 Starting 4 threads 00:43:41.426 00:43:41.426 job0: (groupid=0, jobs=1): err= 0: pid=2296862: Mon Dec 9 10:54:25 2024 00:43:41.426 read: IOPS=1891, BW=7564KiB/s (7746kB/s)(7572KiB/1001msec) 00:43:41.426 slat (nsec): min=5523, max=69573, avg=12291.45, stdev=5996.76 00:43:41.426 clat (usec): min=209, max=817, avg=295.07, stdev=94.65 00:43:41.426 lat (usec): min=216, max=828, avg=307.36, stdev=97.53 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:43:41.426 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 273], 00:43:41.426 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 449], 95.00th=[ 537], 00:43:41.426 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 676], 99.95th=[ 816], 00:43:41.426 | 99.99th=[ 816] 00:43:41.426 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:41.426 slat (nsec): min=7389, max=95022, avg=11244.73, stdev=6599.07 00:43:41.426 clat 
(usec): min=140, max=1028, avg=186.28, stdev=48.92 00:43:41.426 lat (usec): min=149, max=1040, avg=197.53, stdev=52.13 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:43:41.426 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:43:41.426 | 70.00th=[ 178], 80.00th=[ 223], 90.00th=[ 249], 95.00th=[ 269], 00:43:41.426 | 99.00th=[ 375], 99.50th=[ 420], 99.90th=[ 474], 99.95th=[ 474], 00:43:41.426 | 99.99th=[ 1029] 00:43:41.426 bw ( KiB/s): min= 8192, max= 8192, per=35.48%, avg=8192.00, stdev= 0.00, samples=1 00:43:41.426 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:41.426 lat (usec) : 250=69.88%, 500=26.92%, 750=3.15%, 1000=0.03% 00:43:41.426 lat (msec) : 2=0.03% 00:43:41.426 cpu : usr=2.50%, sys=4.50%, ctx=3943, majf=0, minf=1 00:43:41.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 issued rwts: total=1893,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:41.426 job1: (groupid=0, jobs=1): err= 0: pid=2296869: Mon Dec 9 10:54:25 2024 00:43:41.426 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:43:41.426 slat (nsec): min=5243, max=51613, avg=12080.33, stdev=6282.27 00:43:41.426 clat (usec): min=202, max=12685, avg=369.80, stdev=325.98 00:43:41.426 lat (usec): min=220, max=12703, avg=381.88, stdev=326.42 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 285], 00:43:41.426 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 367], 00:43:41.426 | 70.00th=[ 392], 80.00th=[ 433], 90.00th=[ 474], 95.00th=[ 529], 00:43:41.426 | 99.00th=[ 627], 99.50th=[ 685], 99.90th=[ 783], 99.95th=[12649], 00:43:41.426 | 99.99th=[12649] 00:43:41.426 write: IOPS=1628, BW=6513KiB/s (6670kB/s)(6520KiB/1001msec); 0 zone resets 00:43:41.426 slat (nsec): min=7014, max=62277, avg=11367.01, stdev=4680.27 00:43:41.426 clat (usec): min=143, max=480, avg=235.17, stdev=47.32 00:43:41.426 lat (usec): min=152, max=499, avg=246.53, stdev=48.49 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 190], 20.00th=[ 206], 00:43:41.426 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:43:41.426 | 70.00th=[ 241], 80.00th=[ 262], 90.00th=[ 293], 95.00th=[ 330], 00:43:41.426 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 469], 99.95th=[ 482], 00:43:41.426 | 99.99th=[ 482] 00:43:41.426 bw ( KiB/s): min= 8192, max= 8192, per=35.48%, avg=8192.00, stdev= 0.00, samples=1 00:43:41.426 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:41.426 lat (usec) : 250=40.18%, 500=56.44%, 750=3.28%, 1000=0.06% 00:43:41.426 lat (msec) : 20=0.03% 00:43:41.426 cpu : usr=2.10%, sys=4.30%, ctx=3166, majf=0, minf=2 00:43:41.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 issued rwts: total=1536,1630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:41.426 job2: (groupid=0, jobs=1): err= 0: pid=2296870: Mon Dec 9 10:54:25 2024 00:43:41.426 read: 
IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:43:41.426 slat (nsec): min=6254, max=63744, avg=13423.11, stdev=6353.50 00:43:41.426 clat (usec): min=219, max=683, avg=374.88, stdev=77.88 00:43:41.426 lat (usec): min=230, max=718, avg=388.30, stdev=79.16 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 289], 20.00th=[ 302], 00:43:41.426 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 379], 00:43:41.426 | 70.00th=[ 416], 80.00th=[ 445], 90.00th=[ 486], 95.00th=[ 519], 00:43:41.426 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 685], 00:43:41.426 | 99.99th=[ 685] 00:43:41.426 write: IOPS=1661, BW=6645KiB/s (6805kB/s)(6652KiB/1001msec); 0 zone resets 00:43:41.426 slat (nsec): min=7972, max=53674, avg=12786.69, stdev=4356.20 00:43:41.426 clat (usec): min=156, max=448, avg=222.02, stdev=38.19 00:43:41.426 lat (usec): min=171, max=490, avg=234.81, stdev=39.61 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 194], 00:43:41.426 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 221], 00:43:41.426 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 277], 95.00th=[ 306], 00:43:41.426 | 99.00th=[ 347], 99.50th=[ 388], 99.90th=[ 445], 99.95th=[ 449], 00:43:41.426 | 99.99th=[ 449] 00:43:41.426 bw ( KiB/s): min= 8192, max= 8192, per=35.48%, avg=8192.00, stdev= 0.00, samples=1 00:43:41.426 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:41.426 lat (usec) : 250=44.70%, 500=51.77%, 750=3.53% 00:43:41.426 cpu : usr=1.80%, sys=4.50%, ctx=3201, majf=0, minf=1 00:43:41.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 issued rwts: total=1536,1663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:41.426 job3: (groupid=0, jobs=1): err= 0: pid=2296871: Mon Dec 9 10:54:25 2024 00:43:41.426 read: IOPS=52, BW=209KiB/s (214kB/s)(212KiB/1014msec) 00:43:41.426 slat (nsec): min=8085, max=27776, avg=14777.40, stdev=4215.46 00:43:41.426 clat (usec): min=248, max=41331, avg=16454.17, stdev=20075.27 00:43:41.426 lat (usec): min=257, max=41340, avg=16468.95, stdev=20074.05 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 293], 20.00th=[ 322], 00:43:41.426 | 30.00th=[ 330], 40.00th=[ 355], 50.00th=[ 416], 60.00th=[ 529], 00:43:41.426 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:41.426 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:41.426 | 99.99th=[41157] 00:43:41.426 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:43:41.426 slat (nsec): min=8185, max=38691, avg=12570.73, stdev=4378.91 00:43:41.426 clat (usec): min=197, max=462, avg=259.53, stdev=38.05 00:43:41.426 lat (usec): min=207, max=475, avg=272.10, stdev=38.41 00:43:41.426 clat percentiles (usec): 00:43:41.426 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 235], 00:43:41.426 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 255], 00:43:41.426 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 306], 95.00th=[ 343], 00:43:41.426 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 461], 99.95th=[ 461], 00:43:41.426 | 99.99th=[ 461] 00:43:41.426 bw ( KiB/s): min= 4096, max= 4096, per=17.74%, avg=4096.00, 
stdev= 0.00, samples=1 00:43:41.426 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:41.426 lat (usec) : 250=50.27%, 500=45.84%, 750=0.18% 00:43:41.426 lat (msec) : 50=3.72% 00:43:41.426 cpu : usr=0.10%, sys=0.99%, ctx=565, majf=0, minf=2 00:43:41.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:41.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:41.426 issued rwts: total=53,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:41.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:41.427 00:43:41.427 Run status group 0 (all jobs): 00:43:41.427 READ: bw=19.3MiB/s (20.3MB/s), 209KiB/s-7564KiB/s (214kB/s-7746kB/s), io=19.6MiB (20.6MB), run=1001-1014msec 00:43:41.427 WRITE: bw=22.5MiB/s (23.6MB/s), 2020KiB/s-8184KiB/s (2068kB/s-8380kB/s), io=22.9MiB (24.0MB), run=1001-1014msec 00:43:41.427 00:43:41.427 Disk stats (read/write): 00:43:41.427 nvme0n1: ios=1589/1789, merge=0/0, ticks=1033/328, in_queue=1361, util=89.38% 00:43:41.427 nvme0n2: ios=1161/1536, merge=0/0, ticks=494/345, in_queue=839, util=90.44% 00:43:41.427 nvme0n3: ios=1201/1536, merge=0/0, ticks=1399/335, in_queue=1734, util=97.49% 00:43:41.427 nvme0n4: ios=96/512, merge=0/0, ticks=785/133, in_queue=918, util=94.71% 00:43:41.427 10:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:41.427 [global] 00:43:41.427 thread=1 00:43:41.427 invalidate=1 00:43:41.427 rw=write 00:43:41.427 time_based=1 00:43:41.427 runtime=1 00:43:41.427 ioengine=libaio 00:43:41.427 direct=1 00:43:41.427 bs=4096 00:43:41.427 iodepth=128 00:43:41.427 norandommap=0 00:43:41.427 numjobs=1 00:43:41.427 00:43:41.427 verify_dump=1 00:43:41.427 verify_backlog=512 00:43:41.427 verify_state_save=0 00:43:41.427 do_verify=1 00:43:41.427 verify=crc32c-intel 00:43:41.427 [job0] 00:43:41.427 filename=/dev/nvme0n1 00:43:41.427 [job1] 00:43:41.427 filename=/dev/nvme0n2 00:43:41.427 [job2] 00:43:41.427 filename=/dev/nvme0n3 00:43:41.427 [job3] 00:43:41.427 filename=/dev/nvme0n4 00:43:41.427 Could not set queue depth (nvme0n1) 00:43:41.427 Could not set queue depth (nvme0n2) 00:43:41.427 Could not set queue depth (nvme0n3) 00:43:41.427 Could not set queue depth (nvme0n4) 00:43:41.427 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:41.427 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:41.427 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:41.427 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:41.427 fio-3.35 00:43:41.427 Starting 4 threads 00:43:42.804 00:43:42.804 job0: (groupid=0, jobs=1): err= 0: pid=2297209: Mon Dec 9 10:54:27 2024 00:43:42.804 read: IOPS=4295, BW=16.8MiB/s (17.6MB/s)(17.5MiB/1044msec) 00:43:42.804 slat (usec): min=2, max=15880, avg=112.41, stdev=742.99 00:43:42.804 clat (usec): min=4475, max=56469, avg=15875.81, stdev=8116.66 00:43:42.804 lat (usec): min=4484, max=56473, avg=15988.22, stdev=8144.98 00:43:42.804 clat percentiles (usec): 00:43:42.804 | 1.00th=[ 7570], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10814], 00:43:42.804 | 30.00th=[11863], 40.00th=[12387], 
50.00th=[12911], 60.00th=[14091], 00:43:42.804 | 70.00th=[16319], 80.00th=[18744], 90.00th=[23462], 95.00th=[31065], 00:43:42.804 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:43:42.804 | 99.99th=[56361] 00:43:42.804 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:43:42.804 slat (usec): min=4, max=9340, avg=97.31, stdev=518.18 00:43:42.804 clat (usec): min=7944, max=23440, avg=13143.15, stdev=2288.47 00:43:42.804 lat (usec): min=7956, max=25692, avg=13240.47, stdev=2318.81 00:43:42.804 clat percentiles (usec): 00:43:42.804 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:43:42.804 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:43:42.804 | 70.00th=[13960], 80.00th=[15401], 90.00th=[16450], 95.00th=[17171], 00:43:42.804 | 99.00th=[19006], 99.50th=[20055], 99.90th=[21627], 99.95th=[22414], 00:43:42.804 | 99.99th=[23462] 00:43:42.804 bw ( KiB/s): min=18288, max=18576, per=29.75%, avg=18432.00, stdev=203.65, samples=2 00:43:42.804 iops : min= 4572, max= 4644, avg=4608.00, stdev=50.91, samples=2 00:43:42.804 lat (msec) : 10=5.10%, 20=86.40%, 50=7.81%, 100=0.69% 00:43:42.804 cpu : usr=3.26%, sys=9.20%, ctx=468, majf=0, minf=1 00:43:42.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:42.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:42.804 issued rwts: total=4485,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:42.804 job1: (groupid=0, jobs=1): err= 0: pid=2297210: Mon Dec 9 10:54:27 2024 00:43:42.804 read: IOPS=4735, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1003msec) 00:43:42.804 slat (usec): min=2, max=12434, avg=101.00, stdev=589.59 00:43:42.804 clat (usec): min=1872, max=31543, avg=13385.57, stdev=4466.73 00:43:42.804 lat (usec): min=1876, max=35540, avg=13486.57, stdev=4487.62 00:43:42.804 clat percentiles (usec): 00:43:42.804 | 1.00th=[ 5407], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10421], 00:43:42.804 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:43:42.804 | 70.00th=[13304], 80.00th=[14484], 90.00th=[19792], 95.00th=[23725], 00:43:42.804 | 99.00th=[28705], 99.50th=[30540], 99.90th=[31589], 99.95th=[31589], 00:43:42.804 | 99.99th=[31589] 00:43:42.804 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:43:42.804 slat (usec): min=4, max=8641, avg=89.85, stdev=473.33 00:43:42.804 clat (usec): min=887, max=34463, avg=12249.01, stdev=4694.26 00:43:42.804 lat (usec): min=901, max=35123, avg=12338.86, stdev=4728.23 00:43:42.804 clat percentiles (usec): 00:43:42.804 | 1.00th=[ 7177], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[ 9634], 00:43:42.804 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11469], 60.00th=[11863], 00:43:42.804 | 70.00th=[12125], 80.00th=[12518], 90.00th=[15401], 95.00th=[23725], 00:43:42.804 | 99.00th=[31851], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:43:42.804 | 99.99th=[34341] 00:43:42.804 bw ( KiB/s): min=16889, max=24104, per=33.08%, avg=20496.50, stdev=5101.78, samples=2 00:43:42.804 iops : min= 4222, max= 6026, avg=5124.00, stdev=1275.62, samples=2 00:43:42.804 lat (usec) : 1000=0.03% 00:43:42.804 lat (msec) : 2=0.08%, 4=0.16%, 10=18.80%, 20=72.54%, 50=8.38% 00:43:42.804 cpu : usr=4.29%, sys=7.09%, ctx=431, majf=0, minf=1 00:43:42.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, 
>=64=99.4% 00:43:42.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:42.804 issued rwts: total=4750,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:42.804 job2: (groupid=0, jobs=1): err= 0: pid=2297211: Mon Dec 9 10:54:27 2024 00:43:42.804 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:43:42.804 slat (usec): min=3, max=15093, avg=131.05, stdev=933.12 00:43:42.804 clat (usec): min=4370, max=44893, avg=17447.33, stdev=6301.61 00:43:42.804 lat (usec): min=4376, max=44900, avg=17578.38, stdev=6377.51 00:43:42.804 clat percentiles (usec): 00:43:42.804 | 1.00th=[ 8029], 5.00th=[11207], 10.00th=[11469], 20.00th=[12256], 00:43:42.804 | 30.00th=[12911], 40.00th=[14091], 50.00th=[14615], 60.00th=[17171], 00:43:42.804 | 70.00th=[20579], 80.00th=[23725], 90.00th=[26870], 95.00th=[28967], 00:43:42.804 | 99.00th=[34341], 99.50th=[35914], 99.90th=[44827], 99.95th=[44827], 00:43:42.804 | 99.99th=[44827] 00:43:42.804 write: IOPS=3768, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1006msec); 0 zone resets 00:43:42.804 slat (usec): min=4, max=11269, avg=125.94, stdev=735.57 00:43:42.804 clat (usec): min=923, max=63280, avg=17177.22, stdev=11307.08 00:43:42.804 lat (usec): min=954, max=63295, avg=17303.16, stdev=11387.51 00:43:42.804 clat percentiles (usec): 00:43:42.804 | 1.00th=[ 2507], 5.00th=[ 5669], 10.00th=[ 7767], 20.00th=[10814], 00:43:42.804 | 30.00th=[11600], 40.00th=[12518], 50.00th=[13042], 60.00th=[14222], 00:43:42.804 | 70.00th=[18482], 80.00th=[23725], 90.00th=[26084], 95.00th=[45351], 00:43:42.804 | 99.00th=[60556], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:43:42.804 | 99.99th=[63177] 00:43:42.804 bw ( KiB/s): min= 8824, max=20480, per=23.65%, avg=14652.00, stdev=8242.04, samples=2 00:43:42.804 iops : min= 2206, max= 5120, avg=3663.00, stdev=2060.51, samples=2 00:43:42.804 lat (usec) : 1000=0.01% 00:43:42.804 lat (msec) : 2=0.24%, 4=1.00%, 10=8.68%, 20=61.57%, 50=26.59% 00:43:42.804 lat (msec) : 100=1.90% 00:43:42.804 cpu : usr=2.99%, sys=6.47%, ctx=340, majf=0, minf=1 00:43:42.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:43:42.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:42.804 issued rwts: total=3584,3791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:42.804 job3: (groupid=0, jobs=1): err= 0: pid=2297213: Mon Dec 9 10:54:27 2024 00:43:42.804 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:43:42.804 slat (usec): min=4, max=17576, avg=190.68, stdev=1218.31 00:43:42.804 clat (usec): min=12405, max=48305, avg=23836.39, stdev=8992.92 00:43:42.804 lat (usec): min=12499, max=48322, avg=24027.07, stdev=9051.76 00:43:42.804 clat percentiles (usec): 00:43:42.804 | 1.00th=[12780], 5.00th=[13698], 10.00th=[14746], 20.00th=[15795], 00:43:42.804 | 30.00th=[17171], 40.00th=[18482], 50.00th=[19792], 60.00th=[23462], 00:43:42.804 | 70.00th=[30278], 80.00th=[32113], 90.00th=[37487], 95.00th=[40109], 00:43:42.805 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[46400], 00:43:42.805 | 99.99th=[48497] 00:43:42.805 write: IOPS=2646, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1003msec); 0 zone resets 00:43:42.805 slat (usec): min=6, max=13402, avg=183.86, stdev=1026.93 
00:43:42.805 clat (usec): min=1377, max=59391, avg=24489.23, stdev=8786.97 00:43:42.805 lat (usec): min=7225, max=59415, avg=24673.10, stdev=8820.85 00:43:42.805 clat percentiles (usec): 00:43:42.805 | 1.00th=[ 7701], 5.00th=[13829], 10.00th=[16450], 20.00th=[17695], 00:43:42.805 | 30.00th=[18744], 40.00th=[21103], 50.00th=[23725], 60.00th=[23987], 00:43:42.805 | 70.00th=[27657], 80.00th=[31589], 90.00th=[34341], 95.00th=[38536], 00:43:42.805 | 99.00th=[56361], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:43:42.805 | 99.99th=[59507] 00:43:42.805 bw ( KiB/s): min=10032, max=10496, per=16.56%, avg=10264.00, stdev=328.10, samples=2 00:43:42.805 iops : min= 2508, max= 2624, avg=2566.00, stdev=82.02, samples=2 00:43:42.805 lat (msec) : 2=0.02%, 10=1.32%, 20=42.65%, 50=54.79%, 100=1.21% 00:43:42.805 cpu : usr=2.69%, sys=5.49%, ctx=199, majf=0, minf=1 00:43:42.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:43:42.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:42.805 issued rwts: total=2560,2654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:42.805 00:43:42.805 Run status group 0 (all jobs): 00:43:42.805 READ: bw=57.5MiB/s (60.3MB/s), 9.97MiB/s-18.5MiB/s (10.5MB/s-19.4MB/s), io=60.1MiB (63.0MB), run=1003-1044msec 00:43:42.805 WRITE: bw=60.5MiB/s (63.5MB/s), 10.3MiB/s-19.9MiB/s (10.8MB/s-20.9MB/s), io=63.2MiB (66.2MB), run=1003-1044msec 00:43:42.805 00:43:42.805 Disk stats (read/write): 00:43:42.805 nvme0n1: ios=3627/4006, merge=0/0, ticks=22143/18482, in_queue=40625, util=97.60% 00:43:42.805 nvme0n2: ios=4146/4120, merge=0/0, ticks=18671/18378, in_queue=37049, util=97.87% 00:43:42.805 nvme0n3: ios=3124/3447, merge=0/0, ticks=34144/33081, in_queue=67225, util=97.49% 00:43:42.805 nvme0n4: ios=2106/2295, merge=0/0, ticks=19813/18253, in_queue=38066, util=97.89% 00:43:42.805 10:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:42.805 [global] 00:43:42.805 thread=1 00:43:42.805 invalidate=1 00:43:42.805 rw=randwrite 00:43:42.805 time_based=1 00:43:42.805 runtime=1 00:43:42.805 ioengine=libaio 00:43:42.805 direct=1 00:43:42.805 bs=4096 00:43:42.805 iodepth=128 00:43:42.805 norandommap=0 00:43:42.805 numjobs=1 00:43:42.805 00:43:42.805 verify_dump=1 00:43:42.805 verify_backlog=512 00:43:42.805 verify_state_save=0 00:43:42.805 do_verify=1 00:43:42.805 verify=crc32c-intel 00:43:42.805 [job0] 00:43:42.805 filename=/dev/nvme0n1 00:43:42.805 [job1] 00:43:42.805 filename=/dev/nvme0n2 00:43:42.805 [job2] 00:43:42.805 filename=/dev/nvme0n3 00:43:42.805 [job3] 00:43:42.805 filename=/dev/nvme0n4 00:43:42.805 Could not set queue depth (nvme0n1) 00:43:42.805 Could not set queue depth (nvme0n2) 00:43:42.805 Could not set queue depth (nvme0n3) 00:43:42.805 Could not set queue depth (nvme0n4) 00:43:42.805 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:42.805 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:42.805 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:42.805 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:43:42.805 fio-3.35 00:43:42.805 Starting 4 threads 00:43:44.182 00:43:44.182 job0: (groupid=0, jobs=1): err= 0: pid=2297438: Mon Dec 9 10:54:28 2024 00:43:44.182 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:43:44.182 slat (usec): min=3, max=14505, avg=130.21, stdev=857.13 00:43:44.182 clat (usec): min=7128, max=41994, avg=16510.67, stdev=7292.64 00:43:44.182 lat (usec): min=7137, max=42034, avg=16640.87, stdev=7342.08 00:43:44.182 clat percentiles (usec): 00:43:44.182 | 1.00th=[ 7177], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10945], 00:43:44.182 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13304], 60.00th=[16909], 00:43:44.182 | 70.00th=[17957], 80.00th=[20841], 90.00th=[28443], 95.00th=[32375], 00:43:44.182 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:43:44.182 | 99.99th=[42206] 00:43:44.182 write: IOPS=4009, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1011msec); 0 zone resets 00:43:44.182 slat (usec): min=4, max=17008, avg=122.34, stdev=791.15 00:43:44.182 clat (usec): min=3258, max=37648, avg=17011.42, stdev=6874.25 00:43:44.182 lat (usec): min=3279, max=37660, avg=17133.76, stdev=6912.77 00:43:44.182 clat percentiles (usec): 00:43:44.182 | 1.00th=[ 6980], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:43:44.182 | 30.00th=[11207], 40.00th=[13042], 50.00th=[15795], 60.00th=[18482], 00:43:44.182 | 70.00th=[19792], 80.00th=[21890], 90.00th=[27395], 95.00th=[32113], 00:43:44.182 | 99.00th=[32900], 99.50th=[33817], 99.90th=[37487], 99.95th=[37487], 00:43:44.182 | 99.99th=[37487] 00:43:44.182 bw ( KiB/s): min=12120, max=19288, per=24.29%, avg=15704.00, stdev=5068.54, samples=2 00:43:44.182 iops : min= 3030, max= 4822, avg=3926.00, stdev=1267.14, samples=2 00:43:44.182 lat (msec) : 4=0.03%, 10=6.93%, 20=69.25%, 50=23.80% 00:43:44.182 cpu : usr=4.46%, sys=5.84%, ctx=356, majf=0, minf=1 00:43:44.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:44.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:44.182 issued rwts: total=3584,4054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:44.182 job1: (groupid=0, jobs=1): err= 0: pid=2297439: Mon Dec 9 10:54:28 2024 00:43:44.182 read: IOPS=5008, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1005msec) 00:43:44.182 slat (usec): min=3, max=8491, avg=94.61, stdev=579.15 00:43:44.182 clat (usec): min=542, max=33663, avg=12428.29, stdev=4030.12 00:43:44.182 lat (usec): min=4008, max=33670, avg=12522.90, stdev=4050.56 00:43:44.182 clat percentiles (usec): 00:43:44.182 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:43:44.182 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11207], 60.00th=[11731], 00:43:44.182 | 70.00th=[13042], 80.00th=[14484], 90.00th=[17433], 95.00th=[20055], 00:43:44.182 | 99.00th=[27919], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:43:44.182 | 99.99th=[33817] 00:43:44.182 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:43:44.182 slat (usec): min=4, max=15091, avg=94.54, stdev=593.11 00:43:44.182 clat (usec): min=5322, max=27379, avg=12621.36, stdev=3646.43 00:43:44.182 lat (usec): min=5343, max=33670, avg=12715.90, stdev=3677.42 00:43:44.182 clat percentiles (usec): 00:43:44.182 | 1.00th=[ 6849], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10159], 00:43:44.182 | 30.00th=[10814], 40.00th=[11076], 
50.00th=[11338], 60.00th=[11731], 00:43:44.182 | 70.00th=[12256], 80.00th=[15664], 90.00th=[17433], 95.00th=[20055], 00:43:44.182 | 99.00th=[25560], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:43:44.182 | 99.99th=[27395] 00:43:44.182 bw ( KiB/s): min=20288, max=20672, per=31.67%, avg=20480.00, stdev=271.53, samples=2 00:43:44.182 iops : min= 5072, max= 5168, avg=5120.00, stdev=67.88, samples=2 00:43:44.182 lat (usec) : 750=0.01% 00:43:44.182 lat (msec) : 10=21.10%, 20=73.69%, 50=5.21% 00:43:44.182 cpu : usr=4.38%, sys=8.86%, ctx=453, majf=0, minf=2 00:43:44.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:44.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:44.182 issued rwts: total=5034,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:44.182 job2: (groupid=0, jobs=1): err= 0: pid=2297440: Mon Dec 9 10:54:28 2024 00:43:44.182 read: IOPS=3194, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1004msec) 00:43:44.182 slat (usec): min=3, max=30107, avg=161.92, stdev=1177.64 00:43:44.182 clat (msec): min=3, max=122, avg=20.51, stdev=18.55 00:43:44.182 lat (msec): min=3, max=122, avg=20.68, stdev=18.65 00:43:44.182 clat percentiles (msec): 00:43:44.182 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:43:44.182 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:43:44.182 | 70.00th=[ 19], 80.00th=[ 24], 90.00th=[ 32], 95.00th=[ 47], 00:43:44.182 | 99.00th=[ 113], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 123], 00:43:44.182 | 99.99th=[ 123] 00:43:44.182 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:43:44.182 slat (usec): min=4, max=7981, avg=120.90, stdev=577.83 00:43:44.182 clat (usec): min=8759, max=42817, avg=16872.18, stdev=4258.09 00:43:44.182 lat (usec): min=9267, max=42826, avg=16993.08, stdev=4284.81 00:43:44.182 clat percentiles (usec): 00:43:44.182 | 1.00th=[ 9765], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:43:44.182 | 30.00th=[13829], 40.00th=[14484], 50.00th=[15664], 60.00th=[17433], 00:43:44.182 | 70.00th=[19006], 80.00th=[20055], 90.00th=[23462], 95.00th=[23987], 00:43:44.182 | 99.00th=[28181], 99.50th=[30540], 99.90th=[35390], 99.95th=[35390], 00:43:44.182 | 99.99th=[42730] 00:43:44.182 bw ( KiB/s): min=11352, max=17320, per=22.17%, avg=14336.00, stdev=4220.01, samples=2 00:43:44.182 iops : min= 2838, max= 4330, avg=3584.00, stdev=1055.00, samples=2 00:43:44.182 lat (msec) : 4=0.18%, 10=2.53%, 20=74.23%, 50=20.72%, 100=1.10% 00:43:44.182 lat (msec) : 250=1.24% 00:43:44.182 cpu : usr=1.69%, sys=9.07%, ctx=399, majf=0, minf=1 00:43:44.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:44.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:44.182 issued rwts: total=3207,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:44.182 job3: (groupid=0, jobs=1): err= 0: pid=2297442: Mon Dec 9 10:54:28 2024 00:43:44.182 read: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1004msec) 00:43:44.182 slat (usec): min=2, max=12358, avg=110.82, stdev=773.48 00:43:44.182 clat (usec): min=1139, max=31985, avg=16743.79, stdev=5051.98 00:43:44.182 lat (usec): min=1145, max=32001, avg=16854.61, stdev=5087.35 00:43:44.182 
clat percentiles (usec): 00:43:44.182 | 1.00th=[ 3851], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[12649], 00:43:44.182 | 30.00th=[13566], 40.00th=[15139], 50.00th=[16712], 60.00th=[17433], 00:43:44.182 | 70.00th=[18744], 80.00th=[21365], 90.00th=[23725], 95.00th=[25035], 00:43:44.182 | 99.00th=[30016], 99.50th=[30016], 99.90th=[30016], 99.95th=[30802], 00:43:44.182 | 99.99th=[32113] 00:43:44.182 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:43:44.182 slat (usec): min=3, max=25951, avg=147.19, stdev=1061.19 00:43:44.182 clat (usec): min=4405, max=65299, avg=19447.32, stdev=10121.40 00:43:44.182 lat (usec): min=4411, max=65307, avg=19594.51, stdev=10188.23 00:43:44.182 clat percentiles (usec): 00:43:44.182 | 1.00th=[ 6652], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[13304], 00:43:44.182 | 30.00th=[14484], 40.00th=[15533], 50.00th=[17171], 60.00th=[19006], 00:43:44.182 | 70.00th=[19268], 80.00th=[22676], 90.00th=[34866], 95.00th=[41157], 00:43:44.182 | 99.00th=[56361], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 00:43:44.182 | 99.99th=[65274] 00:43:44.182 bw ( KiB/s): min=12360, max=16312, per=22.17%, avg=14336.00, stdev=2794.49, samples=2 00:43:44.182 iops : min= 3090, max= 4078, avg=3584.00, stdev=698.62, samples=2 00:43:44.182 lat (msec) : 2=0.17%, 4=0.37%, 10=7.75%, 20=65.74%, 50=24.34% 00:43:44.182 lat (msec) : 100=1.63% 00:43:44.182 cpu : usr=2.39%, sys=4.99%, ctx=248, majf=0, minf=1 00:43:44.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:44.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:44.182 issued rwts: total=3450,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:44.182 00:43:44.182 Run status group 0 (all jobs): 00:43:44.182 READ: bw=59.0MiB/s (61.9MB/s), 12.5MiB/s-19.6MiB/s (13.1MB/s-20.5MB/s), io=59.7MiB (62.6MB), run=1004-1011msec 00:43:44.182 WRITE: bw=63.1MiB/s (66.2MB/s), 13.9MiB/s-19.9MiB/s (14.6MB/s-20.9MB/s), io=63.8MiB (66.9MB), run=1004-1011msec 00:43:44.182 00:43:44.182 Disk stats (read/write): 00:43:44.182 nvme0n1: ios=3122/3479, merge=0/0, ticks=21976/30343, in_queue=52319, util=89.98% 00:43:44.182 nvme0n2: ios=4114/4544, merge=0/0, ticks=22647/24557, in_queue=47204, util=89.91% 00:43:44.182 nvme0n3: ios=2615/2935, merge=0/0, ticks=17747/14392, in_queue=32139, util=96.62% 00:43:44.182 nvme0n4: ios=3089/3105, merge=0/0, ticks=33911/44925, in_queue=78836, util=89.64% 00:43:44.182 10:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:44.182 10:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2297579 00:43:44.182 10:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:44.182 10:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:44.182 [global] 00:43:44.182 thread=1 00:43:44.182 invalidate=1 00:43:44.182 rw=read 00:43:44.182 time_based=1 00:43:44.182 runtime=10 00:43:44.182 ioengine=libaio 00:43:44.182 direct=1 00:43:44.182 bs=4096 00:43:44.182 iodepth=1 00:43:44.182 norandommap=1 00:43:44.182 numjobs=1 00:43:44.182 00:43:44.182 [job0] 00:43:44.182 filename=/dev/nvme0n1 00:43:44.182 [job1] 00:43:44.183 filename=/dev/nvme0n2 
00:43:44.183 [job2] 00:43:44.183 filename=/dev/nvme0n3 00:43:44.183 [job3] 00:43:44.183 filename=/dev/nvme0n4 00:43:44.183 Could not set queue depth (nvme0n1) 00:43:44.183 Could not set queue depth (nvme0n2) 00:43:44.183 Could not set queue depth (nvme0n3) 00:43:44.183 Could not set queue depth (nvme0n4) 00:43:44.442 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.442 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.442 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.442 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:44.442 fio-3.35 00:43:44.442 Starting 4 threads 00:43:47.728 10:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:47.728 10:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:47.728 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=20250624, buflen=4096 00:43:47.728 fio: pid=2297670, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:47.987 10:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:47.987 10:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:47.987 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=28717056, buflen=4096 00:43:47.987 fio: pid=2297669, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:48.245 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50126848, buflen=4096 00:43:48.245 fio: pid=2297667, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:48.245 10:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:48.245 10:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:48.812 10:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:48.812 10:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:48.812 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51822592, buflen=4096 00:43:48.812 fio: pid=2297668, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:48.812 00:43:48.812 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2297667: Mon Dec 9 10:54:33 2024 00:43:48.812 read: IOPS=3281, BW=12.8MiB/s (13.4MB/s)(47.8MiB/3730msec) 00:43:48.812 slat (usec): min=5, max=13996, avg=12.41, stdev=194.01 00:43:48.812 clat (usec): min=187, max=40915, avg=288.00, 
stdev=375.81 00:43:48.812 lat (usec): min=194, max=40923, avg=300.42, stdev=423.14 00:43:48.812 clat percentiles (usec): 00:43:48.812 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:43:48.812 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 273], 00:43:48.812 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 449], 00:43:48.812 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 914], 99.95th=[ 1156], 00:43:48.812 | 99.99th=[ 3392] 00:43:48.812 bw ( KiB/s): min=10944, max=15064, per=36.14%, avg=13144.14, stdev=1453.61, samples=7 00:43:48.812 iops : min= 2736, max= 3766, avg=3286.00, stdev=363.40, samples=7 00:43:48.812 lat (usec) : 250=38.72%, 500=58.77%, 750=2.39%, 1000=0.04% 00:43:48.812 lat (msec) : 2=0.05%, 4=0.02%, 50=0.01% 00:43:48.812 cpu : usr=1.72%, sys=4.88%, ctx=12244, majf=0, minf=1 00:43:48.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:48.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.812 issued rwts: total=12239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:48.812 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2297668: Mon Dec 9 10:54:33 2024 00:43:48.812 read: IOPS=3122, BW=12.2MiB/s (12.8MB/s)(49.4MiB/4052msec) 00:43:48.812 slat (usec): min=5, max=24365, avg=15.09, stdev=294.12 00:43:48.812 clat (usec): min=198, max=41053, avg=300.33, stdev=1300.12 00:43:48.812 lat (usec): min=204, max=41069, avg=315.42, stdev=1333.07 00:43:48.812 clat percentiles (usec): 00:43:48.812 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:43:48.812 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:43:48.812 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 310], 00:43:48.812 | 99.00th=[ 408], 99.50th=[ 457], 99.90th=[39060], 99.95th=[41157], 00:43:48.812 | 99.99th=[41157] 00:43:48.812 bw ( KiB/s): min= 336, max=15392, per=34.14%, avg=12416.43, stdev=5395.55, samples=7 00:43:48.812 iops : min= 84, max= 3848, avg=3104.00, stdev=1348.87, samples=7 00:43:48.812 lat (usec) : 250=49.08%, 500=50.57%, 750=0.08%, 1000=0.07% 00:43:48.812 lat (msec) : 2=0.09%, 4=0.01%, 50=0.10% 00:43:48.812 cpu : usr=1.46%, sys=4.94%, ctx=12662, majf=0, minf=2 00:43:48.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:48.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.812 issued rwts: total=12653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:48.813 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2297669: Mon Dec 9 10:54:33 2024 00:43:48.813 read: IOPS=2083, BW=8332KiB/s (8532kB/s)(27.4MiB/3366msec) 00:43:48.813 slat (nsec): min=5134, max=53757, avg=10835.91, stdev=3716.06 00:43:48.813 clat (usec): min=215, max=44949, avg=463.14, stdev=2533.78 00:43:48.813 lat (usec): min=224, max=44970, avg=473.98, stdev=2534.06 00:43:48.813 clat percentiles (usec): 00:43:48.813 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 253], 00:43:48.813 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:43:48.813 | 70.00th=[ 318], 80.00th=[ 343], 90.00th=[ 408], 95.00th=[ 457], 
00:43:48.813 | 99.00th=[ 553], 99.50th=[ 963], 99.90th=[41157], 99.95th=[41681], 00:43:48.813 | 99.99th=[44827] 00:43:48.813 bw ( KiB/s): min= 160, max=13624, per=21.49%, avg=7816.00, stdev=5780.13, samples=6 00:43:48.813 iops : min= 40, max= 3406, avg=1954.00, stdev=1445.03, samples=6 00:43:48.813 lat (usec) : 250=18.43%, 500=79.15%, 750=1.88%, 1000=0.03% 00:43:48.813 lat (msec) : 2=0.11%, 50=0.39% 00:43:48.813 cpu : usr=0.80%, sys=3.77%, ctx=7012, majf=0, minf=2 00:43:48.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:48.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.813 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.813 issued rwts: total=7012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:48.813 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2297670: Mon Dec 9 10:54:33 2024 00:43:48.813 read: IOPS=1655, BW=6621KiB/s (6780kB/s)(19.3MiB/2987msec) 00:43:48.813 slat (nsec): min=5318, max=53530, avg=10981.14, stdev=5089.14 00:43:48.813 clat (usec): min=202, max=42010, avg=585.66, stdev=3416.28 00:43:48.813 lat (usec): min=221, max=42025, avg=596.64, stdev=3416.72 00:43:48.813 clat percentiles (usec): 00:43:48.813 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 260], 00:43:48.813 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:43:48.813 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 420], 00:43:48.813 | 99.00th=[ 635], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:43:48.813 | 99.99th=[42206] 00:43:48.813 bw ( KiB/s): min= 112, max=12712, per=20.05%, avg=7292.80, stdev=6355.39, samples=5 00:43:48.813 iops : min= 28, max= 3178, avg=1823.20, stdev=1588.85, samples=5 00:43:48.813 lat (usec) : 250=10.76%, 500=87.22%, 750=1.09%, 1000=0.08% 00:43:48.813 lat (msec) : 2=0.08%, 4=0.04%, 50=0.71% 00:43:48.813 cpu : usr=0.60%, sys=2.34%, ctx=4945, majf=0, minf=2 00:43:48.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:48.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.813 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.813 issued rwts: total=4945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:48.813 00:43:48.813 Run status group 0 (all jobs): 00:43:48.813 READ: bw=35.5MiB/s (37.2MB/s), 6621KiB/s-12.8MiB/s (6780kB/s-13.4MB/s), io=144MiB (151MB), run=2987-4052msec 00:43:48.813 00:43:48.813 Disk stats (read/write): 00:43:48.813 nvme0n1: ios=11773/0, merge=0/0, ticks=3366/0, in_queue=3366, util=94.74% 00:43:48.813 nvme0n2: ios=11988/0, merge=0/0, ticks=3561/0, in_queue=3561, util=95.10% 00:43:48.813 nvme0n3: ios=6986/0, merge=0/0, ticks=3199/0, in_queue=3199, util=96.92% 00:43:48.813 nvme0n4: ios=4931/0, merge=0/0, ticks=2744/0, in_queue=2744, util=96.69% 00:43:49.073 10:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:49.073 10:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:49.332 10:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:43:49.332 10:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:49.899 10:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:49.899 10:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:50.159 10:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:50.159 10:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:50.729 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:50.729 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2297579 00:43:50.729 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:50.729 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:50.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:50.988 nvmf hotplug test: fio failed as expected 00:43:50.988 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:51.247 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:51.247 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:51.247 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:51.247 10:54:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:51.248 rmmod nvme_tcp 00:43:51.248 rmmod nvme_fabrics 00:43:51.248 rmmod nvme_keyring 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2295168 ']' 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2295168 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2295168 ']' 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2295168 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:51.248 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2295168 00:43:51.508 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:51.508 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:51.508 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2295168' 00:43:51.508 killing process with pid 2295168 00:43:51.508 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2295168 00:43:51.508 10:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2295168 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:51.768 10:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:53.679 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:53.679 00:43:53.679 real 0m31.007s 00:43:53.679 user 1m22.598s 00:43:53.679 sys 0m13.746s 00:43:53.679 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:53.679 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.679 ************************************ 00:43:53.679 END TEST nvmf_fio_target 00:43:53.679 ************************************ 00:43:53.679 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:53.679 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:53.679 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:53.679 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:53.939 ************************************ 00:43:53.939 START TEST nvmf_bdevio 00:43:53.939 ************************************ 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:53.939 * Looking for test storage... 
00:43:53.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.939 --rc genhtml_branch_coverage=1 00:43:53.939 --rc genhtml_function_coverage=1 00:43:53.939 --rc genhtml_legend=1 00:43:53.939 --rc geninfo_all_blocks=1 00:43:53.939 --rc geninfo_unexecuted_blocks=1 00:43:53.939 00:43:53.939 ' 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.939 --rc genhtml_branch_coverage=1 00:43:53.939 --rc genhtml_function_coverage=1 00:43:53.939 --rc genhtml_legend=1 00:43:53.939 --rc geninfo_all_blocks=1 00:43:53.939 --rc geninfo_unexecuted_blocks=1 00:43:53.939 00:43:53.939 ' 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.939 --rc genhtml_branch_coverage=1 00:43:53.939 --rc genhtml_function_coverage=1 00:43:53.939 --rc genhtml_legend=1 00:43:53.939 --rc geninfo_all_blocks=1 00:43:53.939 --rc geninfo_unexecuted_blocks=1 00:43:53.939 00:43:53.939 ' 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:53.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.939 --rc genhtml_branch_coverage=1 00:43:53.939 --rc genhtml_function_coverage=1 00:43:53.939 --rc genhtml_legend=1 00:43:53.939 --rc geninfo_all_blocks=1 00:43:53.939 --rc geninfo_unexecuted_blocks=1 00:43:53.939 00:43:53.939 ' 00:43:53.939 10:54:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:53.939 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:53.940 10:54:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:53.940 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:54.199 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:54.199 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:54.199 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:54.199 10:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:43:57.494 Found 0000:84:00.0 (0x8086 - 0x159b) 00:43:57.494 10:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:57.494 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:43:57.495 Found 0000:84:00.1 (0x8086 - 0x159b) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:43:57.495 Found net devices under 0000:84:00.0: cvl_0_0 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:43:57.495 Found net devices under 0000:84:00.1: cvl_0_1 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:57.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:57.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:43:57.495 00:43:57.495 --- 10.0.0.2 ping statistics --- 00:43:57.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.495 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:57.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:57.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:43:57.495 00:43:57.495 --- 10.0.0.1 ping statistics --- 00:43:57.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.495 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:57.495 10:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2300571 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2300571 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2300571 ']' 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:57.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:57.495 10:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.495 [2024-12-09 10:54:41.841024] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:57.495 [2024-12-09 10:54:41.842346] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:43:57.495 [2024-12-09 10:54:41.842414] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:57.495 [2024-12-09 10:54:41.932025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:57.495 [2024-12-09 10:54:41.998207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:57.495 [2024-12-09 10:54:41.998271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:57.495 [2024-12-09 10:54:41.998288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:57.495 [2024-12-09 10:54:41.998302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:57.495 [2024-12-09 10:54:41.998314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:57.495 [2024-12-09 10:54:42.000141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:57.495 [2024-12-09 10:54:42.000232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:57.495 [2024-12-09 10:54:42.000286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:57.496 [2024-12-09 10:54:42.000290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:57.496 [2024-12-09 10:54:42.096308] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:43:57.496 [2024-12-09 10:54:42.096495] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:57.496 [2024-12-09 10:54:42.096795] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:57.496 [2024-12-09 10:54:42.097389] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:57.496 [2024-12-09 10:54:42.097605] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:57.496 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.496 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:57.496 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:57.496 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:57.496 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.755 [2024-12-09 10:54:42.173268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.755 Malloc0 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.755 10:54:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:57.755 [2024-12-09 10:54:42.249432] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:57.755 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:57.756 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:57.756 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:57.756 { 00:43:57.756 "params": { 00:43:57.756 "name": "Nvme$subsystem", 00:43:57.756 "trtype": "$TEST_TRANSPORT", 00:43:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:57.756 "adrfam": "ipv4", 00:43:57.756 "trsvcid": "$NVMF_PORT", 00:43:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:57.756 "hdgst": ${hdgst:-false}, 00:43:57.756 "ddgst": ${ddgst:-false} 00:43:57.756 }, 00:43:57.756 "method": "bdev_nvme_attach_controller" 00:43:57.756 } 00:43:57.756 EOF 00:43:57.756 )") 00:43:57.756 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:57.756 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:43:57.756 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:57.756 10:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:57.756 "params": { 00:43:57.756 "name": "Nvme1", 00:43:57.756 "trtype": "tcp", 00:43:57.756 "traddr": "10.0.0.2", 00:43:57.756 "adrfam": "ipv4", 00:43:57.756 "trsvcid": "4420", 00:43:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:57.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:57.756 "hdgst": false, 00:43:57.756 "ddgst": false 00:43:57.756 }, 00:43:57.756 "method": "bdev_nvme_attach_controller" 00:43:57.756 }' 00:43:57.756 [2024-12-09 10:54:42.303415] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:43:57.756 [2024-12-09 10:54:42.303500] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2300599 ] 00:43:57.756 [2024-12-09 10:54:42.379004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:58.015 [2024-12-09 10:54:42.443770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:58.015 [2024-12-09 10:54:42.443796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:58.015 [2024-12-09 10:54:42.443800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:58.015 I/O targets: 00:43:58.015 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:58.015 00:43:58.015 00:43:58.015 CUnit - A unit testing framework for C - Version 2.1-3 00:43:58.015 http://cunit.sourceforge.net/ 00:43:58.015 00:43:58.015 00:43:58.015 Suite: bdevio tests on: Nvme1n1 00:43:58.015 Test: blockdev write read block ...passed 00:43:58.275 Test: blockdev write zeroes read block ...passed 00:43:58.275 Test: blockdev write zeroes read no split ...passed 00:43:58.275 Test: blockdev write zeroes read split ...passed 00:43:58.275 Test: blockdev write zeroes read split partial ...passed 00:43:58.275 Test: blockdev reset ...[2024-12-09 10:54:42.740226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:58.275 [2024-12-09 10:54:42.740349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bba70 (9): Bad file descriptor 00:43:58.275 [2024-12-09 10:54:42.792940] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:43:58.275 passed 00:43:58.275 Test: blockdev write read 8 blocks ...passed 00:43:58.275 Test: blockdev write read size > 128k ...passed 00:43:58.275 Test: blockdev write read invalid size ...passed 00:43:58.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:58.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:58.275 Test: blockdev write read max offset ...passed 00:43:58.535 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:58.535 Test: blockdev writev readv 8 blocks ...passed 00:43:58.535 Test: blockdev writev readv 30 x 1block ...passed 00:43:58.535 Test: blockdev writev readv block ...passed 00:43:58.535 Test: blockdev writev readv size > 128k ...passed 00:43:58.535 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:58.536 Test: blockdev comparev and writev ...[2024-12-09 10:54:43.048757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.048794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.048832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.048851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.049337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.049363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.049384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.049401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.049891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.049916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.049937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.049953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.050422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.050446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.050468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:58.536 [2024-12-09 10:54:43.050484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:58.536 passed 00:43:58.536 Test: blockdev nvme passthru rw ...passed 00:43:58.536 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:54:43.134050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:58.536 [2024-12-09 10:54:43.134077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.134226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:58.536 [2024-12-09 10:54:43.134250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.134399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:58.536 [2024-12-09 10:54:43.134422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:58.536 [2024-12-09 10:54:43.134572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:58.536 [2024-12-09 10:54:43.134596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:58.536 passed 00:43:58.536 Test: blockdev nvme admin passthru ...passed 00:43:58.796 Test: blockdev copy ...passed
00:43:58.796
00:43:58.796 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:43:58.796               suites      1      1    n/a      0        0
00:43:58.796                tests     23     23     23      0        0
00:43:58.796              asserts    152    152    152      0      n/a
00:43:58.796
00:43:58.796 Elapsed time = 1.133 seconds
00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:58.796 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:58.796 rmmod nvme_tcp 00:43:58.796 rmmod nvme_fabrics 00:43:58.796 rmmod nvme_keyring 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
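The FABRIC CONNECT / INVALID OPCODE completions above, like the COMPARE FAILURE / ABORTED - FAILED FUSED pairs before them, are expected: bdevio deliberately drives failing passthru and fused compare-and-write paths, and the suite still reports 23/23 tests passed. The rmmod output above and the killprocess/iptables restore that follow are nvmftestfini unwinding the fixture; condensed into a hedged sketch ($nvmfpid and the rpc.py location are stand-ins here):

# Roughly the teardown performed by bdevio.sh and nvmftestfini above.
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
kill "$nvmfpid" && wait "$nvmfpid"                        # stop the nvmf_tgt app
for mod in nvme-tcp nvme-fabrics; do                      # unload host modules; the
    modprobe -v -r "$mod" || true                         # harness retries in a loop
done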
00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2300571 ']' 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2300571 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2300571 ']' 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2300571 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2300571 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2300571' 00:43:59.057 killing process with pid 2300571 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2300571 00:43:59.057 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2300571 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:59.318 10:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:01.229 10:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:01.229 00:44:01.229 real 0m7.462s 00:44:01.229 user 
0m8.430s 00:44:01.229 sys 0m3.463s 00:44:01.229 10:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:01.229 10:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:01.229 ************************************ 00:44:01.229 END TEST nvmf_bdevio 00:44:01.229 ************************************ 00:44:01.229 10:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:44:01.229 00:44:01.229 real 4m57.652s 00:44:01.229 user 10m19.714s 00:44:01.229 sys 1m49.949s 00:44:01.229 10:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:01.229 10:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:01.229 ************************************ 00:44:01.229 END TEST nvmf_target_core_interrupt_mode 00:44:01.229 ************************************ 00:44:01.489 10:54:45 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:01.489 10:54:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:44:01.489 10:54:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:01.489 10:54:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:01.489 ************************************ 00:44:01.489 START TEST nvmf_interrupt 00:44:01.489 ************************************ 00:44:01.489 10:54:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:01.489 * Looking for test storage... 
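Every suite here runs through run_test, which prints the START TEST / END TEST banners and the real/user/sys timing shown above. A hedged reconstruction of its core follows; the real function in test/common/autotest_common.sh also records per-test timing for the final report.

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"          # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}

# As invoked above for this suite (path shortened):
run_test nvmf_interrupt ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode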
00:44:01.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:01.489 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:01.489 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:44:01.489 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:01.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:01.490 --rc genhtml_branch_coverage=1 00:44:01.490 --rc genhtml_function_coverage=1 00:44:01.490 --rc genhtml_legend=1 00:44:01.490 --rc geninfo_all_blocks=1 00:44:01.490 --rc geninfo_unexecuted_blocks=1 00:44:01.490 00:44:01.490 ' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:01.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:01.490 --rc genhtml_branch_coverage=1 00:44:01.490 --rc genhtml_function_coverage=1 00:44:01.490 --rc genhtml_legend=1 00:44:01.490 --rc geninfo_all_blocks=1 00:44:01.490 --rc geninfo_unexecuted_blocks=1 00:44:01.490 00:44:01.490 ' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:01.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:01.490 --rc genhtml_branch_coverage=1 00:44:01.490 --rc genhtml_function_coverage=1 00:44:01.490 --rc genhtml_legend=1 00:44:01.490 --rc geninfo_all_blocks=1 00:44:01.490 --rc geninfo_unexecuted_blocks=1 00:44:01.490 00:44:01.490 ' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:01.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:01.490 --rc genhtml_branch_coverage=1 00:44:01.490 --rc genhtml_function_coverage=1 00:44:01.490 --rc genhtml_legend=1 00:44:01.490 --rc geninfo_all_blocks=1 00:44:01.490 --rc geninfo_unexecuted_blocks=1 00:44:01.490 00:44:01.490 ' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:44:01.490 10:54:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:04.782 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:04.782 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:44:04.782 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:04.782 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:04.782 10:54:48 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:44:04.783 Found 0000:84:00.0 (0x8086 - 0x159b) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:04.783 10:54:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:44:04.783 Found 0000:84:00.1 (0x8086 - 0x159b) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:44:04.783 Found net devices under 0000:84:00.0: cvl_0_0 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:44:04.783 Found net devices under 0000:84:00.1: cvl_0_1 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:04.783 10:54:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:04.783 10:54:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:04.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:04.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:44:04.783 00:44:04.783 --- 10.0.0.2 ping statistics --- 00:44:04.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:04.783 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:04.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:04.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:44:04.783 00:44:04.783 --- 10.0.0.1 ping statistics --- 00:44:04.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:04.783 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2302819 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2302819 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2302819 ']' 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:04.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:04.783 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:04.783 [2024-12-09 10:54:49.266223] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:04.783 [2024-12-09 10:54:49.267765] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:44:04.783 [2024-12-09 10:54:49.267842] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:04.783 [2024-12-09 10:54:49.370181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:05.045 [2024-12-09 10:54:49.488797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
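nvmf_tcp_init, traced above, wires the two ice-driver ports into a point-to-point pair: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP port 4420, and the two pings prove reachability in both directions before the target is started. Collapsed from the trace (run as root):

ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns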
00:44:05.045 [2024-12-09 10:54:49.488917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:05.045 [2024-12-09 10:54:49.488955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:05.045 [2024-12-09 10:54:49.488986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:05.045 [2024-12-09 10:54:49.489015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:05.045 [2024-12-09 10:54:49.492559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:05.045 [2024-12-09 10:54:49.492631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:05.045 [2024-12-09 10:54:49.654915] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:05.045 [2024-12-09 10:54:49.654924] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:05.045 [2024-12-09 10:54:49.655513] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:44:05.306 5000+0 records in 00:44:05.306 5000+0 records out 00:44:05.306 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0271599 s, 377 MB/s 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:05.306 AIO0 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.306 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:05.306 [2024-12-09 10:54:49.909888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.307 10:54:49 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:05.307 [2024-12-09 10:54:49.938118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2302819 0 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2302819 0 idle 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:05.307 10:54:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302819 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.47 reactor_0' 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302819 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.47 reactor_0 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2302819 1 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2302819 1 idle 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:05.568 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302825 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302825 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2302983 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
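With spdk_nvme_perf now driving mixed random I/O (-w randrw -M 30) at the target from cores 2 and 3, the busy checks below reuse the same classifier as the idle checks above: one batch top snapshot, grep for the reactor thread, take the %CPU column, and compare it to the threshold (idle_threshold=30, with BUSY_THRESHOLD lowered to 30 for the busy case). In interrupt mode an idle reactor sleeps on its event fd instead of busy-polling, which is why 0.0% is the expected idle reading. Condensed into a self-contained check:

# Condensed reactor_is_busy_or_idle; pid and idx match the values traced above.
pid=2302819 idx=0 idle_threshold=30
line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g')
cpu_rate=$(awk '{print $9}' <<<"$line")   # %CPU column of the thread line
cpu_rate=${cpu_rate%.*}                   # truncate "0.0" -> "0", "99.9" -> "99"
if (( cpu_rate > idle_threshold )); then
    echo "reactor_$idx: not idle (${cpu_rate}%)"
else
    echo "reactor_$idx: idle (${cpu_rate}%)"
fi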
00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2302819 0 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2302819 0 busy 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:05.829 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302819 root 20 0 128.2g 48384 34944 R 60.0 0.1 0:00.56 reactor_0' 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302819 root 20 0 128.2g 48384 34944 R 60.0 0.1 0:00.56 reactor_0 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2302819 1 00:44:06.088 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2302819 1 busy 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302825 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.21 reactor_1' 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302825 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.21 reactor_1 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:06.089 10:54:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2302983 00:44:16.072 Initializing NVMe Controllers 00:44:16.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:16.072 Controller IO queue size 256, less than required. 00:44:16.072 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:16.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:44:16.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:44:16.072 Initialization complete. Launching workers. 
00:44:16.072 ========================================================
00:44:16.072                                                                        Latency(us)
00:44:16.072 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:44:16.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   14267.20      55.73   17952.96    4982.76   23759.93
00:44:16.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   14104.50      55.10   18160.78    5173.18   59610.71
00:44:16.072 ========================================================
00:44:16.072 Total                                                                  :   28371.70     110.83   18056.27    4982.76   59610.71
00:44:16.072
00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2302819 0 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2302819 0 idle 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:16.072 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302819 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.42 reactor_0' 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302819 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.42 reactor_0 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:16.073 10:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2302819 1 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2302819 1 idle 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302825 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.99 reactor_1' 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302825 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.99 reactor_1 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:16.334 10:55:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:16.903 10:55:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:44:16.903 10:55:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:44:16.903 10:55:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:44:16.903 10:55:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:44:16.903 10:55:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2302819 0 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2302819 0 idle 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302819 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.61 reactor_0' 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302819 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.61 reactor_0 00:44:18.814 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:18.815 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:19.074 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:19.074 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2302819 1 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2302819 1 idle 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2302819 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
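The xtrace runs above and below are repeated invocations of the reactor_is_busy_or_idle helper from interrupt/common.sh, which samples a reactor thread's %CPU with top and compares it against the busy/idle thresholds. A minimal sketch of that loop, reconstructed from the traced commands (the sleep between samples is an assumption — the excerpt does not show the back-off):

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=65
        local idle_threshold=30

        [[ $state != "busy" && $state != "idle" ]] && return 1
        hash top || return 1        # sampling relies on top(1); guard is an assumption

        for ((j = 10; j != 0; j--)); do
            # one batch snapshot of the target PID's threads; field 9 is %CPU
            top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
            cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}  # 99.9 -> 99: compare as an integer

            if [[ $state == "busy" ]] && ((cpu_rate < busy_threshold)); then
                sleep 1; continue    # not busy enough yet, re-sample (assumed back-off)
            fi
            if [[ $state == "idle" ]] && ((cpu_rate > idle_threshold)); then
                sleep 1; continue    # still above the idle ceiling, re-sample
            fi
            return 0                 # reactor reached the requested state
        done
        return 1                     # never settled into the requested state
    }

In this run the checks return 0 immediately: reactor_1 is pinned at 99.9% while the perf job is active, and both reactors drop to 0.0% once it completes, which is exactly what interrupt-mode operation is supposed to show.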
00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2302819 -w 256 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2302825 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.06 reactor_1' 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2302825 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.06 reactor_1 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:19.075 10:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:19.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:19.334 rmmod nvme_tcp 00:44:19.334 rmmod nvme_fabrics 00:44:19.334 rmmod nvme_keyring 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2302819 ']' 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2302819 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2302819 ']' 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2302819 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2302819 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2302819' 00:44:19.334 killing process with pid 2302819 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2302819 00:44:19.334 10:55:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2302819 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:19.905 10:55:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:21.809 10:55:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:21.809 00:44:21.809 real 0m20.462s 00:44:21.809 user 0m37.829s 00:44:21.809 sys 0m8.017s 00:44:21.809 10:55:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:21.809 10:55:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:21.809 ************************************ 00:44:21.809 END TEST nvmf_interrupt 00:44:21.809 ************************************ 00:44:21.809 00:44:21.809 real 32m39.554s 00:44:21.809 user 73m38.682s 00:44:21.809 sys 8m41.104s 00:44:21.809 10:55:06 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:21.809 10:55:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.809 ************************************ 00:44:21.809 END TEST nvmf_tcp 00:44:21.809 ************************************ 00:44:21.809 10:55:06 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:44:21.810 10:55:06 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:21.810 10:55:06 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:21.810 10:55:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:21.810 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:44:22.068 ************************************ 00:44:22.068 START TEST spdkcli_nvmf_tcp 00:44:22.068 ************************************ 00:44:22.068 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:22.068 * Looking for test storage... 00:44:22.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:22.068 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:22.068 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:44:22.068 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:22.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.328 --rc genhtml_branch_coverage=1 00:44:22.328 --rc genhtml_function_coverage=1 00:44:22.328 --rc genhtml_legend=1 00:44:22.328 --rc geninfo_all_blocks=1 00:44:22.328 --rc geninfo_unexecuted_blocks=1 00:44:22.328 00:44:22.328 ' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:22.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.328 --rc genhtml_branch_coverage=1 00:44:22.328 --rc genhtml_function_coverage=1 00:44:22.328 --rc genhtml_legend=1 00:44:22.328 --rc geninfo_all_blocks=1 00:44:22.328 --rc geninfo_unexecuted_blocks=1 00:44:22.328 00:44:22.328 ' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:22.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.328 --rc genhtml_branch_coverage=1 00:44:22.328 --rc genhtml_function_coverage=1 00:44:22.328 --rc genhtml_legend=1 00:44:22.328 --rc geninfo_all_blocks=1 00:44:22.328 --rc geninfo_unexecuted_blocks=1 00:44:22.328 00:44:22.328 ' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:22.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.328 --rc genhtml_branch_coverage=1 00:44:22.328 --rc genhtml_function_coverage=1 00:44:22.328 --rc genhtml_legend=1 00:44:22.328 --rc geninfo_all_blocks=1 00:44:22.328 --rc geninfo_unexecuted_blocks=1 00:44:22.328 00:44:22.328 ' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:44:22.328 
10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:22.328 10:55:06 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:22.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:22.328 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2304978 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2304978 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2304978 ']' 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:22.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:22.329 10:55:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:22.329 [2024-12-09 10:55:06.882411] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
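The run_nvmf_tgt/waitforlisten sequence traced above launches the target (here with -m 0x3 -p 0, pid 2304978) and blocks until it answers on /var/tmp/spdk.sock, retrying up to the max_retries=100 default. In outline it looks like the sketch below; the flags and defaults come from the trace, but the rpc.py probe is an assumption, since the real helper polls with xtrace disabled, and $rootdir stands in for the SPDK repo root:

    run_nvmf_tgt() {
        "$rootdir/build/bin/nvmf_tgt" -m 0x3 -p 0 &
        nvmf_tgt_pid=$!
        waitforlisten "$nvmf_tgt_pid"
    }

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            # assumed probe: a trivial RPC succeeds once the socket is listening
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }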
00:44:22.329 [2024-12-09 10:55:06.882587] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304978 ] 00:44:22.588 [2024-12-09 10:55:07.051407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:22.588 [2024-12-09 10:55:07.170791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:22.588 [2024-12-09 10:55:07.170808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:23.526 10:55:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:23.526 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:23.526 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:23.526 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:23.526 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:23.526 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:23.526 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:23.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:23.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:23.526 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:23.526 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:23.526 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:23.526 ' 00:44:26.845 [2024-12-09 10:55:11.267518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:28.225 [2024-12-09 10:55:12.554013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:30.764 [2024-12-09 10:55:14.962997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:32.672 [2024-12-09 10:55:17.031157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:34.051 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:34.051 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:34.051 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:34.051 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:34.051 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:34.051 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:34.051 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:34.051 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:34.051 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:34.051 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:34.051 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:34.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:34.051 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:34.311 10:55:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:34.880 10:55:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:35.139 10:55:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:35.139 10:55:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:35.139 10:55:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:35.139 10:55:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:35.139 
10:55:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:35.139 10:55:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:35.139 10:55:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:35.139 10:55:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:35.139 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:35.139 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:35.139 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:35.139 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:35.139 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:35.139 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:35.139 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:35.139 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:35.139 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:35.139 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:35.139 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:35.139 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:35.139 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:35.139 ' 00:44:41.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:41.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:41.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:41.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:41.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:41.706 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:41.706 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:41.706 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:41.706 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:41.706 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:41.706 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:41.706 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:41.706 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:41.706 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.706 
10:55:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2304978 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2304978 ']' 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2304978 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304978 00:44:41.706 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304978' 00:44:41.707 killing process with pid 2304978 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2304978 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2304978 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2304978 ']' 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2304978 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2304978 ']' 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2304978 00:44:41.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2304978) - No such process 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2304978 is not found' 00:44:41.707 Process with pid 2304978 is not found 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:41.707 00:44:41.707 real 0m19.292s 00:44:41.707 user 0m41.657s 00:44:41.707 sys 0m1.322s 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:41.707 10:55:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:41.707 ************************************ 00:44:41.707 END TEST spdkcli_nvmf_tcp 00:44:41.707 ************************************ 00:44:41.707 10:55:25 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:41.707 10:55:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:41.707 10:55:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:41.707 10:55:25 -- common/autotest_common.sh@10 -- # set +x 00:44:41.707 ************************************ 00:44:41.707 START TEST nvmf_identify_passthru 00:44:41.707 ************************************ 00:44:41.707 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:41.707 * Looking for test 
storage... 00:44:41.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:41.707 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:41.707 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:44:41.707 10:55:25 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:41.707 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:41.707 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:41.707 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.707 --rc genhtml_branch_coverage=1 00:44:41.707 --rc genhtml_function_coverage=1 00:44:41.707 --rc genhtml_legend=1 00:44:41.707 --rc geninfo_all_blocks=1 00:44:41.707 --rc geninfo_unexecuted_blocks=1 00:44:41.707 00:44:41.707 ' 00:44:41.707 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.707 --rc genhtml_branch_coverage=1 00:44:41.707 --rc genhtml_function_coverage=1 00:44:41.707 --rc genhtml_legend=1 00:44:41.707 --rc geninfo_all_blocks=1 00:44:41.707 --rc geninfo_unexecuted_blocks=1 00:44:41.707 00:44:41.707 ' 00:44:41.707 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.707 --rc genhtml_branch_coverage=1 00:44:41.707 --rc genhtml_function_coverage=1 00:44:41.707 --rc genhtml_legend=1 00:44:41.707 --rc geninfo_all_blocks=1 00:44:41.707 --rc geninfo_unexecuted_blocks=1 00:44:41.707 00:44:41.707 ' 00:44:41.707 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:41.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.707 --rc genhtml_branch_coverage=1 00:44:41.707 --rc genhtml_function_coverage=1 00:44:41.707 --rc genhtml_legend=1 00:44:41.707 --rc geninfo_all_blocks=1 00:44:41.707 --rc geninfo_unexecuted_blocks=1 00:44:41.707 00:44:41.707 ' 00:44:41.707 10:55:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.707 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.707 10:55:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:41.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:41.708 10:55:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.708 10:55:26 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:41.708 10:55:26 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.708 10:55:26 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.708 10:55:26 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:41.708 10:55:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.708 10:55:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.708 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:41.708 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:41.708 10:55:26 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:41.708 10:55:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:44.995 10:55:29 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:44:44.995 Found 0000:84:00.0 (0x8086 - 0x159b) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:44:44.995 Found 0000:84:00.1 (0x8086 - 0x159b) 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:44.995 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:44:44.996 Found net devices under 0000:84:00.0: cvl_0_0 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:44:44.996 Found net devices under 0000:84:00.1: cvl_0_1 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:44.996 10:55:29 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:44.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:44.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:44:44.996 00:44:44.996 --- 10.0.0.2 ping statistics --- 00:44:44.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:44.996 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:44.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:44.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:44:44.996 00:44:44.996 --- 10.0.0.1 ping statistics --- 00:44:44.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:44.996 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:44.996 10:55:29 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:44.996 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:44.996 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:82:00.0 00:44:44.996 10:55:29 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:82:00.0 00:44:44.996 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:44:44.996 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:44:44.996 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:44:44.996 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:44.996 10:55:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:49.188 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ9142051K1P0FGN 00:44:49.188 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:44:49.188 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:49.188 10:55:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:54.474 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:44:54.474 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:54.474 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:54.474 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:54.474 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:54.474 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2309873 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2309873 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2309873 ']' 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:54.475 [2024-12-09 10:55:38.309981] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:44:54.475 [2024-12-09 10:55:38.310169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:54.475 [2024-12-09 10:55:38.482371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:54.475 [2024-12-09 10:55:38.606481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:54.475 [2024-12-09 10:55:38.606590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
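
The trace above has just completed the two pieces of setup the rest of this test depends on. First, the harness built its loopback NVMe/TCP topology: one port of the dual-port E810 NIC (cvl_0_0) was moved into a private network namespace to act as the target side, while the sibling port (cvl_0_1) stayed in the root namespace as the initiator, letting a single host exercise real NIC hardware end to end. Second, spdk_nvme_identify recorded the local drive's serial and model numbers over PCIe so they can later be compared against what the passthru subsystem reports over the fabric. The topology, condensed into a stand-alone sketch (interface names, addresses, and the 4420 listener port are taken verbatim from this run; the iptables comment added by the harness's ipts wrapper is dropped for brevity):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from unconfigured ports
  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

This is also why nvmf_tgt above is launched via "ip netns exec cvl_0_0_ns_spdk": the target listens on 10.0.0.2 inside the namespace while the identify tools and fio connect from 10.0.0.1 outside it.
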
00:44:54.475 [2024-12-09 10:55:38.606626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:54.475 [2024-12-09 10:55:38.606657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:54.475 [2024-12-09 10:55:38.606682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:54.475 [2024-12-09 10:55:38.610257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:54.475 [2024-12-09 10:55:38.610363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:54.475 [2024-12-09 10:55:38.610458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:54.475 [2024-12-09 10:55:38.610462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:54.475 INFO: Log level set to 20 00:44:54.475 INFO: Requests: 00:44:54.475 { 00:44:54.475 "jsonrpc": "2.0", 00:44:54.475 "method": "nvmf_set_config", 00:44:54.475 "id": 1, 00:44:54.475 "params": { 00:44:54.475 "admin_cmd_passthru": { 00:44:54.475 "identify_ctrlr": true 00:44:54.475 } 00:44:54.475 } 00:44:54.475 } 00:44:54.475 00:44:54.475 INFO: response: 00:44:54.475 { 00:44:54.475 "jsonrpc": "2.0", 00:44:54.475 "id": 1, 00:44:54.475 "result": true 00:44:54.475 } 00:44:54.475 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:54.475 INFO: Setting log level to 20 00:44:54.475 INFO: Setting log level to 20 00:44:54.475 INFO: Log level set to 20 00:44:54.475 INFO: Log level set to 20 00:44:54.475 INFO: Requests: 00:44:54.475 { 00:44:54.475 "jsonrpc": "2.0", 00:44:54.475 "method": "framework_start_init", 00:44:54.475 "id": 1 00:44:54.475 } 00:44:54.475 00:44:54.475 INFO: Requests: 00:44:54.475 { 00:44:54.475 "jsonrpc": "2.0", 00:44:54.475 "method": "framework_start_init", 00:44:54.475 "id": 1 00:44:54.475 } 00:44:54.475 00:44:54.475 [2024-12-09 10:55:38.981555] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:54.475 INFO: response: 00:44:54.475 { 00:44:54.475 "jsonrpc": "2.0", 00:44:54.475 "id": 1, 00:44:54.475 "result": true 00:44:54.475 } 00:44:54.475 00:44:54.475 INFO: response: 00:44:54.475 { 00:44:54.475 "jsonrpc": "2.0", 00:44:54.475 "id": 1, 00:44:54.475 "result": true 00:44:54.475 } 00:44:54.475 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.475 10:55:38 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:44:54.475 INFO: Setting log level to 40 00:44:54.475 INFO: Setting log level to 40 00:44:54.475 INFO: Setting log level to 40 00:44:54.475 [2024-12-09 10:55:38.991614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.475 10:55:38 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:54.475 10:55:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:54.475 10:55:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:44:54.475 10:55:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.475 10:55:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:57.762 Nvme0n1 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.762 10:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.762 10:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.762 10:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:57.762 [2024-12-09 10:55:41.897897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.762 10:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:57.762 [ 00:44:57.762 { 00:44:57.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:57.762 "subtype": "Discovery", 00:44:57.762 "listen_addresses": [], 00:44:57.762 "allow_any_host": true, 00:44:57.762 "hosts": [] 00:44:57.762 }, 00:44:57.762 { 00:44:57.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:57.762 "subtype": "NVMe", 00:44:57.762 "listen_addresses": [ 00:44:57.762 { 00:44:57.762 "trtype": "TCP", 00:44:57.762 "adrfam": "IPv4", 00:44:57.762 "traddr": "10.0.0.2", 00:44:57.762 "trsvcid": "4420" 00:44:57.762 } 00:44:57.762 ], 00:44:57.762 "allow_any_host": true, 00:44:57.762 "hosts": [], 00:44:57.762 "serial_number": 
"SPDK00000000000001", 00:44:57.762 "model_number": "SPDK bdev Controller", 00:44:57.762 "max_namespaces": 1, 00:44:57.762 "min_cntlid": 1, 00:44:57.762 "max_cntlid": 65519, 00:44:57.762 "namespaces": [ 00:44:57.762 { 00:44:57.762 "nsid": 1, 00:44:57.762 "bdev_name": "Nvme0n1", 00:44:57.762 "name": "Nvme0n1", 00:44:57.762 "nguid": "C9BA2085CB4C4A2FB8AB7861FBA2532D", 00:44:57.762 "uuid": "c9ba2085-cb4c-4a2f-b8ab-7861fba2532d" 00:44:57.762 } 00:44:57.762 ] 00:44:57.762 } 00:44:57.762 ] 00:44:57.762 10:55:41 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.762 10:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:57.762 10:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:57.762 10:55:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:57.762 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:44:57.762 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:57.762 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:57.762 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:57.762 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:44:57.763 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:44:57.763 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:44:57.763 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:57.763 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:57.763 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:57.763 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.763 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:57.763 10:55:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:57.763 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:57.763 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:57.763 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:57.763 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:57.763 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:57.763 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:57.763 rmmod nvme_tcp 00:44:58.022 rmmod nvme_fabrics 00:44:58.022 rmmod nvme_keyring 00:44:58.022 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:58.022 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:58.022 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:58.022 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2309873 ']' 00:44:58.022 10:55:42 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2309873 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2309873 ']' 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2309873 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309873 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309873' 00:44:58.022 killing process with pid 2309873 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2309873 00:44:58.022 10:55:42 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2309873 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:59.929 10:55:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:59.929 10:55:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:59.929 10:55:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:01.842 10:55:46 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:01.842 00:45:01.842 real 0m20.404s 00:45:01.842 user 0m28.672s 00:45:01.842 sys 0m4.576s 00:45:01.842 10:55:46 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:01.842 10:55:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:01.842 ************************************ 00:45:01.842 END TEST nvmf_identify_passthru 00:45:01.842 ************************************ 00:45:01.842 10:55:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:01.842 10:55:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:01.842 10:55:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:01.842 10:55:46 -- common/autotest_common.sh@10 -- # set +x 00:45:01.842 ************************************ 00:45:01.842 START TEST nvmf_dif 00:45:01.842 ************************************ 00:45:01.842 10:55:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:01.842 * Looking for test 
storage... 00:45:01.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:01.842 10:55:46 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:01.842 10:55:46 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:45:01.842 10:55:46 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:02.102 10:55:46 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:02.102 10:55:46 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:02.102 10:55:46 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:02.102 10:55:46 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:02.102 10:55:46 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:45:02.102 10:55:46 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:45:02.102 10:55:46 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:45:02.103 10:55:46 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:02.103 10:55:46 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:02.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.103 --rc genhtml_branch_coverage=1 00:45:02.103 --rc genhtml_function_coverage=1 00:45:02.103 --rc genhtml_legend=1 00:45:02.103 --rc geninfo_all_blocks=1 00:45:02.103 --rc geninfo_unexecuted_blocks=1 00:45:02.103 00:45:02.103 ' 00:45:02.103 10:55:46 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:02.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.103 --rc genhtml_branch_coverage=1 00:45:02.103 --rc genhtml_function_coverage=1 00:45:02.103 --rc genhtml_legend=1 00:45:02.103 --rc geninfo_all_blocks=1 00:45:02.103 --rc geninfo_unexecuted_blocks=1 00:45:02.103 00:45:02.103 ' 00:45:02.103 10:55:46 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:02.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.103 --rc genhtml_branch_coverage=1 00:45:02.103 --rc genhtml_function_coverage=1 00:45:02.103 --rc genhtml_legend=1 00:45:02.103 --rc geninfo_all_blocks=1 00:45:02.103 --rc geninfo_unexecuted_blocks=1 00:45:02.103 00:45:02.103 ' 00:45:02.103 10:55:46 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:02.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.103 --rc genhtml_branch_coverage=1 00:45:02.103 --rc genhtml_function_coverage=1 00:45:02.103 --rc genhtml_legend=1 00:45:02.103 --rc geninfo_all_blocks=1 00:45:02.103 --rc geninfo_unexecuted_blocks=1 00:45:02.103 00:45:02.103 ' 00:45:02.103 10:55:46 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:02.103 10:55:46 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:02.103 10:55:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.103 10:55:46 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.103 10:55:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.103 10:55:46 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:45:02.103 10:55:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:02.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:02.103 10:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:45:02.103 10:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:45:02.103 10:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:45:02.103 10:55:46 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:45:02.103 10:55:46 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:02.103 10:55:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:02.103 10:55:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:02.103 10:55:46 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:45:02.103 10:55:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:45:05.394 Found 0000:84:00.0 (0x8086 - 0x159b) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:05.394 
10:55:49 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:45:05.394 Found 0000:84:00.1 (0x8086 - 0x159b) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:45:05.394 Found net devices under 0000:84:00.0: cvl_0_0 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:45:05.394 Found net devices under 0000:84:00.1: cvl_0_1 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:05.394 10:55:49 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:05.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:05.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:45:05.395 00:45:05.395 --- 10.0.0.2 ping statistics --- 00:45:05.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:05.395 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:05.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:05.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:45:05.395 00:45:05.395 --- 10.0.0.1 ping statistics --- 00:45:05.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:05.395 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:45:05.395 10:55:49 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:06.777 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:45:06.777 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:45:06.777 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:45:06.777 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:45:06.777 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:45:06.777 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:45:06.777 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:45:06.777 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:45:06.777 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:45:06.777 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:45:06.777 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:45:06.777 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:45:06.777 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:45:06.777 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:45:06.777 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:45:06.777 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:45:06.777 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:45:07.036 10:55:51 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:07.036 10:55:51 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:07.036 10:55:51 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:07.036 10:55:51 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:07.037 10:55:51 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:07.037 10:55:51 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:07.037 10:55:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:45:07.037 10:55:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:45:07.037 10:55:51 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:07.037 10:55:51 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2313295 00:45:07.037 10:55:51 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:07.037 10:55:51 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2313295 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2313295 ']' 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:45:07.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:07.037 10:55:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:07.297 [2024-12-09 10:55:51.701057] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:45:07.297 [2024-12-09 10:55:51.701162] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:07.297 [2024-12-09 10:55:51.844810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:07.557 [2024-12-09 10:55:51.962034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:07.557 [2024-12-09 10:55:51.962148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:07.557 [2024-12-09 10:55:51.962185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:07.557 [2024-12-09 10:55:51.962214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:07.557 [2024-12-09 10:55:51.962241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:07.557 [2024-12-09 10:55:51.963638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.816 10:55:52 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:07.816 10:55:52 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:45:07.816 10:55:52 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:07.816 10:55:52 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:07.816 10:55:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:07.816 10:55:52 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:07.816 10:55:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:45:07.816 10:55:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:45:07.817 10:55:52 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.817 10:55:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:07.817 [2024-12-09 10:55:52.341279] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:07.817 10:55:52 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.817 10:55:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:45:07.817 10:55:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:07.817 10:55:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:07.817 10:55:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:07.817 ************************************ 00:45:07.817 START TEST fio_dif_1_default 00:45:07.817 ************************************ 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:07.817 bdev_null0 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:07.817 [2024-12-09 10:55:52.410932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:07.817 { 00:45:07.817 "params": { 00:45:07.817 "name": "Nvme$subsystem", 00:45:07.817 "trtype": "$TEST_TRANSPORT", 00:45:07.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:07.817 "adrfam": "ipv4", 00:45:07.817 "trsvcid": "$NVMF_PORT", 00:45:07.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:07.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:07.817 "hdgst": ${hdgst:-false}, 00:45:07.817 "ddgst": ${ddgst:-false} 00:45:07.817 }, 00:45:07.817 "method": "bdev_nvme_attach_controller" 00:45:07.817 } 00:45:07.817 EOF 00:45:07.817 )") 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
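
The subsystem bring-up traced above reduces to five RPCs: a TCP transport created with --dif-insert-or-strip, a 64 MB null bdev with 512-byte blocks plus 16 bytes of metadata carrying DIF type 1, and a subsystem exposing that bdev on 10.0.0.2:4420. A minimal sketch of the same sequence outside the harness, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock (in this job the target itself runs inside the cvl_0_0_ns_spdk network namespace):

rpc=scripts/rpc.py

# TCP transport with DIF insert/strip enabled, as in target/dif.sh@50
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip

# 64 MB null bdev: 512-byte blocks, 16-byte metadata, protection type 1
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Subsystem cnode0 backed by bdev_null0, listening on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
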
00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:07.817 "params": { 00:45:07.817 "name": "Nvme0", 00:45:07.817 "trtype": "tcp", 00:45:07.817 "traddr": "10.0.0.2", 00:45:07.817 "adrfam": "ipv4", 00:45:07.817 "trsvcid": "4420", 00:45:07.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:07.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:07.817 "hdgst": false, 00:45:07.817 "ddgst": false 00:45:07.817 }, 00:45:07.817 "method": "bdev_nvme_attach_controller" 00:45:07.817 }' 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:07.817 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:08.076 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:08.076 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:08.076 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:08.076 10:55:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:08.336 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:08.336 fio-3.35 00:45:08.336 Starting 1 thread 00:45:20.548 00:45:20.549 filename0: (groupid=0, jobs=1): err= 0: pid=2313649: Mon Dec 9 10:56:03 2024 00:45:20.549 read: IOPS=103, BW=413KiB/s (422kB/s)(4128KiB/10007msec) 00:45:20.549 slat (nsec): min=4909, max=61300, avg=18430.86, stdev=8903.62 00:45:20.549 clat (usec): min=1046, max=43039, avg=38727.53, stdev=10590.02 00:45:20.549 lat (usec): min=1066, max=43054, avg=38745.96, stdev=10589.14 00:45:20.549 clat percentiles (usec): 00:45:20.549 | 1.00th=[ 1139], 5.00th=[ 1254], 10.00th=[41157], 20.00th=[41157], 00:45:20.549 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:45:20.549 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:45:20.549 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:45:20.549 | 99.99th=[43254] 00:45:20.549 bw ( KiB/s): min= 384, max= 480, per=99.63%, avg=411.20, stdev=33.28, samples=20 00:45:20.549 iops : min= 96, max= 120, avg=102.80, stdev= 8.32, samples=20 00:45:20.549 lat (msec) : 2=7.36%, 50=92.64% 00:45:20.549 cpu : usr=91.43%, sys=8.05%, ctx=14, majf=0, minf=9 00:45:20.549 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.549 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.549 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:20.549 00:45:20.549 Run status group 0 (all jobs): 00:45:20.549 READ: bw=413KiB/s (422kB/s), 413KiB/s-413KiB/s (422kB/s-422kB/s), io=4128KiB (4227kB), run=10007-10007msec 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 00:45:20.549 real 0m11.716s 00:45:20.549 user 0m10.744s 00:45:20.549 sys 0m1.250s 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 ************************************ 00:45:20.549 END TEST fio_dif_1_default 00:45:20.549 ************************************ 00:45:20.549 10:56:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:20.549 10:56:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:20.549 10:56:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 ************************************ 00:45:20.549 START TEST fio_dif_1_multi_subsystems 00:45:20.549 ************************************ 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 bdev_null0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 [2024-12-09 10:56:04.204663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 bdev_null1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:20.549 { 00:45:20.549 "params": { 00:45:20.549 "name": "Nvme$subsystem", 00:45:20.549 "trtype": "$TEST_TRANSPORT", 00:45:20.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:20.549 "adrfam": "ipv4", 00:45:20.549 "trsvcid": "$NVMF_PORT", 00:45:20.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:20.549 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:45:20.549 "hdgst": ${hdgst:-false}, 00:45:20.549 "ddgst": ${ddgst:-false} 00:45:20.549 }, 00:45:20.549 "method": "bdev_nvme_attach_controller" 00:45:20.549 } 00:45:20.549 EOF 00:45:20.549 )") 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:20.549 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:20.550 { 00:45:20.550 "params": { 00:45:20.550 "name": "Nvme$subsystem", 00:45:20.550 "trtype": "$TEST_TRANSPORT", 00:45:20.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:20.550 "adrfam": "ipv4", 00:45:20.550 "trsvcid": "$NVMF_PORT", 00:45:20.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:20.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:20.550 "hdgst": ${hdgst:-false}, 00:45:20.550 "ddgst": ${ddgst:-false} 00:45:20.550 }, 00:45:20.550 "method": "bdev_nvme_attach_controller" 00:45:20.550 } 00:45:20.550 EOF 00:45:20.550 )") 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@584 -- # jq . 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:20.550 "params": { 00:45:20.550 "name": "Nvme0", 00:45:20.550 "trtype": "tcp", 00:45:20.550 "traddr": "10.0.0.2", 00:45:20.550 "adrfam": "ipv4", 00:45:20.550 "trsvcid": "4420", 00:45:20.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:20.550 "hdgst": false, 00:45:20.550 "ddgst": false 00:45:20.550 }, 00:45:20.550 "method": "bdev_nvme_attach_controller" 00:45:20.550 },{ 00:45:20.550 "params": { 00:45:20.550 "name": "Nvme1", 00:45:20.550 "trtype": "tcp", 00:45:20.550 "traddr": "10.0.0.2", 00:45:20.550 "adrfam": "ipv4", 00:45:20.550 "trsvcid": "4420", 00:45:20.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:20.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:20.550 "hdgst": false, 00:45:20.550 "ddgst": false 00:45:20.550 }, 00:45:20.550 "method": "bdev_nvme_attach_controller" 00:45:20.550 }' 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:20.550 10:56:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.550 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:20.550 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:20.550 fio-3.35 00:45:20.550 Starting 2 threads 00:45:32.758 00:45:32.758 filename0: (groupid=0, jobs=1): err= 0: pid=2315048: Mon Dec 9 10:56:15 2024 00:45:32.758 read: IOPS=99, BW=398KiB/s (408kB/s)(3984KiB/10005msec) 00:45:32.758 slat (nsec): min=8407, max=64362, avg=14603.79, stdev=7213.32 00:45:32.758 clat (usec): min=919, max=45900, avg=40132.20, stdev=9177.26 00:45:32.758 lat (usec): min=930, max=45931, avg=40146.80, stdev=9177.23 00:45:32.758 clat percentiles (usec): 00:45:32.758 | 1.00th=[ 1123], 5.00th=[ 1270], 10.00th=[41681], 20.00th=[41681], 00:45:32.758 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:45:32.758 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43779], 00:45:32.758 | 99.00th=[44827], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:45:32.758 | 99.99th=[45876] 00:45:32.758 bw ( KiB/s): min= 352, max= 
480, per=34.93%, avg=396.80, stdev=35.05, samples=20 00:45:32.758 iops : min= 88, max= 120, avg=99.20, stdev= 8.76, samples=20 00:45:32.758 lat (usec) : 1000=0.40% 00:45:32.758 lat (msec) : 2=4.82%, 50=94.78% 00:45:32.758 cpu : usr=97.28%, sys=2.30%, ctx=10, majf=0, minf=71 00:45:32.758 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:32.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.758 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.758 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:32.758 filename1: (groupid=0, jobs=1): err= 0: pid=2315049: Mon Dec 9 10:56:15 2024 00:45:32.758 read: IOPS=183, BW=736KiB/s (753kB/s)(7360KiB/10005msec) 00:45:32.758 slat (nsec): min=8787, max=96673, avg=22137.00, stdev=8547.76 00:45:32.758 clat (usec): min=607, max=43539, avg=21680.46, stdev=20438.43 00:45:32.758 lat (usec): min=617, max=43571, avg=21702.60, stdev=20438.41 00:45:32.758 clat percentiles (usec): 00:45:32.758 | 1.00th=[ 676], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1352], 00:45:32.758 | 30.00th=[ 1418], 40.00th=[ 1467], 50.00th=[ 2180], 60.00th=[41681], 00:45:32.758 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:45:32.758 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:45:32.758 | 99.99th=[43779] 00:45:32.758 bw ( KiB/s): min= 704, max= 768, per=64.74%, avg=734.40, stdev=31.96, samples=20 00:45:32.758 iops : min= 176, max= 192, avg=183.60, stdev= 7.99, samples=20 00:45:32.758 lat (usec) : 750=2.01%, 1000=1.25% 00:45:32.758 lat (msec) : 2=46.52%, 4=0.43%, 50=49.78% 00:45:32.758 cpu : usr=95.89%, sys=3.52%, ctx=22, majf=0, minf=47 00:45:32.758 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:32.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.758 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.758 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:32.758 00:45:32.758 Run status group 0 (all jobs): 00:45:32.758 READ: bw=1134KiB/s (1161kB/s), 398KiB/s-736KiB/s (408kB/s-753kB/s), io=11.1MiB (11.6MB), run=10005-10005msec 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 
-- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 00:45:32.758 real 0m12.008s 00:45:32.758 user 0m21.351s 00:45:32.758 sys 0m1.069s 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 ************************************ 00:45:32.758 END TEST fio_dif_1_multi_subsystems 00:45:32.758 ************************************ 00:45:32.758 10:56:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:32.758 10:56:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:32.758 10:56:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 ************************************ 00:45:32.758 START TEST fio_dif_rand_params 00:45:32.758 ************************************ 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:32.758 10:56:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 bdev_null0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.758 [2024-12-09 10:56:16.268189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:32.758 { 00:45:32.758 "params": { 00:45:32.758 "name": "Nvme$subsystem", 00:45:32.758 "trtype": "$TEST_TRANSPORT", 00:45:32.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:32.758 "adrfam": "ipv4", 00:45:32.758 "trsvcid": "$NVMF_PORT", 00:45:32.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:32.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:32.758 "hdgst": ${hdgst:-false}, 00:45:32.758 "ddgst": ${ddgst:-false} 00:45:32.758 }, 00:45:32.758 "method": 
"bdev_nvme_attach_controller" 00:45:32.758 } 00:45:32.758 EOF 00:45:32.758 )") 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:32.758 10:56:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:32.759 "params": { 00:45:32.759 "name": "Nvme0", 00:45:32.759 "trtype": "tcp", 00:45:32.759 "traddr": "10.0.0.2", 00:45:32.759 "adrfam": "ipv4", 00:45:32.759 "trsvcid": "4420", 00:45:32.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:32.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:32.759 "hdgst": false, 00:45:32.759 "ddgst": false 00:45:32.759 }, 00:45:32.759 "method": "bdev_nvme_attach_controller" 00:45:32.759 }' 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:32.759 10:56:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:32.759 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:32.759 ... 
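
The job file itself never appears in the log (it travels over /dev/fd/61), but its shape follows from the knobs set at target/dif.sh@103 (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) and the job line printed below. A sketch of what gen_fio_conf plausibly emits for this pass; the [global] housekeeping options and the Nvme0n1 filename (the namespace bdev created by bdev_nvme_attach_controller name=Nvme0) are assumptions, not copied from the log:

cat > job.fio <<EOF
[global]
thread=1
time_based=1
runtime=5

[filename0]
rw=randread
bs=128k
numjobs=3
iodepth=3
filename=Nvme0n1
EOF
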
00:45:32.759 fio-3.35 00:45:32.759 Starting 3 threads 00:45:38.026 00:45:38.026 filename0: (groupid=0, jobs=1): err= 0: pid=2316441: Mon Dec 9 10:56:22 2024 00:45:38.026 read: IOPS=106, BW=13.3MiB/s (13.9MB/s)(66.5MiB/5010msec) 00:45:38.026 slat (nsec): min=10304, max=34851, avg=19406.33, stdev=3817.66 00:45:38.027 clat (usec): min=10731, max=71639, avg=28221.33, stdev=7692.04 00:45:38.027 lat (usec): min=10748, max=71658, avg=28240.73, stdev=7693.00 00:45:38.027 clat percentiles (usec): 00:45:38.027 | 1.00th=[14746], 5.00th=[17433], 10.00th=[19006], 20.00th=[21365], 00:45:38.027 | 30.00th=[22938], 40.00th=[25822], 50.00th=[27919], 60.00th=[30016], 00:45:38.027 | 70.00th=[32113], 80.00th=[34341], 90.00th=[38011], 95.00th=[40109], 00:45:38.027 | 99.00th=[56361], 99.50th=[58983], 99.90th=[71828], 99.95th=[71828], 00:45:38.027 | 99.99th=[71828] 00:45:38.027 bw ( KiB/s): min=11008, max=16128, per=35.51%, avg=13542.40, stdev=1758.98, samples=10 00:45:38.027 iops : min= 86, max= 126, avg=105.80, stdev=13.74, samples=10 00:45:38.027 lat (msec) : 20=13.91%, 50=84.96%, 100=1.13% 00:45:38.027 cpu : usr=94.97%, sys=4.49%, ctx=7, majf=0, minf=111 00:45:38.027 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:38.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.027 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:38.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:38.027 filename0: (groupid=0, jobs=1): err= 0: pid=2316442: Mon Dec 9 10:56:22 2024 00:45:38.027 read: IOPS=104, BW=13.1MiB/s (13.7MB/s)(66.1MiB/5048msec) 00:45:38.027 slat (nsec): min=10211, max=81386, avg=20104.94, stdev=4599.81 00:45:38.027 clat (usec): min=11362, max=65631, avg=28517.43, stdev=6983.54 00:45:38.027 lat (usec): min=11380, max=65652, avg=28537.53, stdev=6983.48 00:45:38.027 clat percentiles (usec): 00:45:38.027 | 1.00th=[14484], 5.00th=[19006], 10.00th=[21103], 20.00th=[22152], 00:45:38.027 | 30.00th=[23987], 40.00th=[26346], 50.00th=[28443], 60.00th=[30278], 00:45:38.027 | 70.00th=[31851], 80.00th=[33424], 90.00th=[36439], 95.00th=[40109], 00:45:38.027 | 99.00th=[44827], 99.50th=[61080], 99.90th=[65799], 99.95th=[65799], 00:45:38.027 | 99.99th=[65799] 00:45:38.027 bw ( KiB/s): min=12032, max=15104, per=35.31%, avg=13465.60, stdev=966.94, samples=10 00:45:38.027 iops : min= 94, max= 118, avg=105.20, stdev= 7.55, samples=10 00:45:38.027 lat (msec) : 20=7.18%, 50=91.87%, 100=0.95% 00:45:38.027 cpu : usr=95.09%, sys=4.34%, ctx=13, majf=0, minf=116 00:45:38.027 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:38.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.027 issued rwts: total=529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:38.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:38.027 filename0: (groupid=0, jobs=1): err= 0: pid=2316443: Mon Dec 9 10:56:22 2024 00:45:38.027 read: IOPS=87, BW=11.0MiB/s (11.5MB/s)(55.4MiB/5043msec) 00:45:38.027 slat (nsec): min=6250, max=71124, avg=26326.93, stdev=10356.16 00:45:38.027 clat (usec): min=11207, max=74515, avg=34105.73, stdev=18532.27 00:45:38.027 lat (usec): min=11224, max=74533, avg=34132.05, stdev=18531.30 00:45:38.027 clat percentiles (usec): 00:45:38.027 | 1.00th=[12911], 5.00th=[17695], 10.00th=[19530], 20.00th=[21365], 
00:45:38.027 | 30.00th=[23200], 40.00th=[25035], 50.00th=[26084], 60.00th=[27395], 00:45:38.027 | 70.00th=[29492], 80.00th=[63701], 90.00th=[67634], 95.00th=[68682], 00:45:38.027 | 99.00th=[72877], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:45:38.027 | 99.99th=[74974] 00:45:38.027 bw ( KiB/s): min= 6912, max=17920, per=29.54%, avg=11264.00, stdev=3812.40, samples=10 00:45:38.027 iops : min= 54, max= 140, avg=88.00, stdev=29.78, samples=10 00:45:38.027 lat (msec) : 20=14.00%, 50=62.30%, 100=23.70% 00:45:38.027 cpu : usr=92.66%, sys=5.53%, ctx=72, majf=0, minf=93 00:45:38.027 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:38.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.027 issued rwts: total=443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:38.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:38.027 00:45:38.027 Run status group 0 (all jobs): 00:45:38.027 READ: bw=37.2MiB/s (39.1MB/s), 11.0MiB/s-13.3MiB/s (11.5MB/s-13.9MB/s), io=188MiB (197MB), run=5010-5048msec 00:45:38.594 10:56:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:38.594 10:56:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:38.594 10:56:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:38.594 10:56:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:38.594 10:56:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 bdev_null0 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 [2024-12-09 10:56:23.066403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 bdev_null1 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 bdev_null2 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.594 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:38.595 10:56:23 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:38.595 { 00:45:38.595 "params": { 00:45:38.595 "name": "Nvme$subsystem", 00:45:38.595 "trtype": "$TEST_TRANSPORT", 00:45:38.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:38.595 "adrfam": "ipv4", 00:45:38.595 "trsvcid": "$NVMF_PORT", 00:45:38.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:38.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:38.595 "hdgst": ${hdgst:-false}, 00:45:38.595 "ddgst": ${ddgst:-false} 00:45:38.595 }, 00:45:38.595 "method": "bdev_nvme_attach_controller" 00:45:38.595 } 00:45:38.595 EOF 00:45:38.595 )") 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:38.595 { 00:45:38.595 "params": { 00:45:38.595 "name": "Nvme$subsystem", 00:45:38.595 "trtype": "$TEST_TRANSPORT", 00:45:38.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:38.595 "adrfam": "ipv4", 00:45:38.595 "trsvcid": "$NVMF_PORT", 00:45:38.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:38.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:38.595 "hdgst": ${hdgst:-false}, 00:45:38.595 "ddgst": ${ddgst:-false} 00:45:38.595 }, 00:45:38.595 "method": "bdev_nvme_attach_controller" 00:45:38.595 } 00:45:38.595 EOF 00:45:38.595 )") 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
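
Each config+=("$(cat <<-EOF ...)") traced around this point is one loop iteration per subsystem id; gen_nvmf_target_json then comma-joins the entries and pretty-prints the result through jq, which is what the IFS=, printf that follows produces. A condensed, runnable sketch of that assembly (the gen_config name and the hard-coded tcp/10.0.0.2/4420 values are stand-ins for this job's variables, and the real helper in nvmf/common.sh may wrap the joined entries in a fuller skeleton):

gen_config() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the per-controller entries into one bdev-subsystem config
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}

gen_config 0 1 2   # emits Nvme0, Nvme1, Nvme2 attach entries, as printed below
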
00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:38.595 { 00:45:38.595 "params": { 00:45:38.595 "name": "Nvme$subsystem", 00:45:38.595 "trtype": "$TEST_TRANSPORT", 00:45:38.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:38.595 "adrfam": "ipv4", 00:45:38.595 "trsvcid": "$NVMF_PORT", 00:45:38.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:38.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:38.595 "hdgst": ${hdgst:-false}, 00:45:38.595 "ddgst": ${ddgst:-false} 00:45:38.595 }, 00:45:38.595 "method": "bdev_nvme_attach_controller" 00:45:38.595 } 00:45:38.595 EOF 00:45:38.595 )") 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:38.595 "params": { 00:45:38.595 "name": "Nvme0", 00:45:38.595 "trtype": "tcp", 00:45:38.595 "traddr": "10.0.0.2", 00:45:38.595 "adrfam": "ipv4", 00:45:38.595 "trsvcid": "4420", 00:45:38.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:38.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:38.595 "hdgst": false, 00:45:38.595 "ddgst": false 00:45:38.595 }, 00:45:38.595 "method": "bdev_nvme_attach_controller" 00:45:38.595 },{ 00:45:38.595 "params": { 00:45:38.595 "name": "Nvme1", 00:45:38.595 "trtype": "tcp", 00:45:38.595 "traddr": "10.0.0.2", 00:45:38.595 "adrfam": "ipv4", 00:45:38.595 "trsvcid": "4420", 00:45:38.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:38.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:38.595 "hdgst": false, 00:45:38.595 "ddgst": false 00:45:38.595 }, 00:45:38.595 "method": "bdev_nvme_attach_controller" 00:45:38.595 },{ 00:45:38.595 "params": { 00:45:38.595 "name": "Nvme2", 00:45:38.595 "trtype": "tcp", 00:45:38.595 "traddr": "10.0.0.2", 00:45:38.595 "adrfam": "ipv4", 00:45:38.595 "trsvcid": "4420", 00:45:38.595 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:38.595 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:38.595 "hdgst": false, 00:45:38.595 "ddgst": false 00:45:38.595 }, 00:45:38.595 "method": "bdev_nvme_attach_controller" 00:45:38.595 }' 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:38.595 10:56:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:38.855 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:38.855 ... 00:45:38.855 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:38.855 ... 00:45:38.855 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:38.855 ... 00:45:38.855 fio-3.35 00:45:38.855 Starting 24 threads 00:45:51.072 00:45:51.072 filename0: (groupid=0, jobs=1): err= 0: pid=2317285: Mon Dec 9 10:56:34 2024 00:45:51.072 read: IOPS=419, BW=1678KiB/s (1718kB/s)(16.4MiB/10032msec) 00:45:51.072 slat (nsec): min=8437, max=99821, avg=33038.77, stdev=11918.41 00:45:51.072 clat (msec): min=24, max=107, avg=37.86, stdev= 9.29 00:45:51.072 lat (msec): min=24, max=107, avg=37.90, stdev= 9.29 00:45:51.072 clat percentiles (msec): 00:45:51.072 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.072 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.072 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.072 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 93], 99.95th=[ 105], 00:45:51.072 | 99.99th=[ 108] 00:45:51.072 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.45, stdev=307.72, samples=20 00:45:51.072 iops : min= 224, max= 480, avg=419.10, stdev=76.93, samples=20 00:45:51.072 lat (msec) : 50=93.30%, 100=6.61%, 250=0.10% 00:45:51.072 cpu : usr=98.19%, sys=1.32%, ctx=48, majf=0, minf=9 00:45:51.072 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:51.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.073 filename0: (groupid=0, jobs=1): err= 0: pid=2317286: Mon Dec 9 10:56:34 2024 00:45:51.073 read: IOPS=419, BW=1678KiB/s (1718kB/s)(16.4MiB/10032msec) 00:45:51.073 slat (usec): min=9, max=129, avg=72.89, stdev=17.21 00:45:51.073 clat (msec): min=20, max=103, avg=37.50, stdev= 9.27 00:45:51.073 lat (msec): min=20, max=103, avg=37.57, stdev= 9.27 00:45:51.073 clat percentiles (msec): 00:45:51.073 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.073 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:45:51.073 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 39], 95.00th=[ 66], 00:45:51.073 | 99.00th=[ 79], 99.50th=[ 80], 99.90th=[ 93], 99.95th=[ 104], 00:45:51.073 | 99.99th=[ 104] 00:45:51.073 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.45, stdev=307.72, samples=20 00:45:51.073 iops : min= 224, max= 480, avg=419.10, stdev=76.93, samples=20 00:45:51.073 lat (msec) : 50=93.30%, 100=6.61%, 250=0.10% 00:45:51.073 cpu : usr=98.42%, sys=1.08%, ctx=19, majf=0, minf=9 00:45:51.073 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:51.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.073 filename0: (groupid=0, jobs=1): err= 0: pid=2317287: Mon Dec 9 10:56:34 2024 00:45:51.073 read: IOPS=418, BW=1675KiB/s (1715kB/s)(16.4MiB/10012msec) 00:45:51.073 slat (usec): min=10, max=122, avg=32.61, stdev=10.18 00:45:51.073 clat (msec): min=17, max=101, avg=37.91, stdev= 9.78 00:45:51.073 lat (msec): min=17, max=101, avg=37.94, stdev= 9.79 00:45:51.073 clat percentiles (msec): 00:45:51.073 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.073 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.073 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.073 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 102], 99.95th=[ 102], 00:45:51.073 | 99.99th=[ 102] 00:45:51.073 bw ( KiB/s): min= 753, max= 1920, per=4.18%, avg=1670.79, stdev=337.47, samples=19 00:45:51.073 iops : min= 188, max= 480, avg=417.68, stdev=84.40, samples=19 00:45:51.073 lat (msec) : 20=0.38%, 50=92.80%, 100=6.39%, 250=0.43% 00:45:51.073 cpu : usr=96.95%, sys=1.93%, ctx=206, majf=0, minf=9 00:45:51.073 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.073 filename0: (groupid=0, jobs=1): err= 0: pid=2317288: Mon Dec 9 10:56:34 2024 00:45:51.073 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10009msec) 00:45:51.073 slat (usec): min=8, max=122, avg=37.57, stdev=15.41 00:45:51.073 clat (msec): min=26, max=152, avg=38.01, stdev=11.13 00:45:51.073 lat (msec): min=26, max=152, avg=38.04, stdev=11.14 00:45:51.073 clat percentiles (msec): 00:45:51.073 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.073 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.073 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 66], 00:45:51.073 | 99.00th=[ 79], 99.50th=[ 80], 99.90th=[ 153], 99.95th=[ 153], 00:45:51.073 | 99.99th=[ 153] 00:45:51.073 bw ( KiB/s): min= 641, max= 1920, per=4.17%, avg=1664.05, stdev=351.68, samples=19 00:45:51.073 iops : min= 160, max= 480, avg=416.00, stdev=87.96, samples=19 00:45:51.073 lat (msec) : 50=93.49%, 100=6.08%, 250=0.43% 00:45:51.073 cpu : usr=98.38%, sys=1.09%, ctx=33, majf=0, minf=9 00:45:51.073 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.073 filename0: (groupid=0, jobs=1): err= 0: pid=2317289: Mon Dec 9 10:56:34 2024 00:45:51.073 read: IOPS=419, BW=1678KiB/s (1718kB/s)(16.4MiB/10032msec) 00:45:51.073 slat (usec): min=8, max=120, avg=22.90, stdev=14.25 00:45:51.073 clat (msec): min=19, max=107, avg=37.96, stdev= 9.19 00:45:51.073 lat (msec): min=19, max=107, avg=37.98, stdev= 9.19 00:45:51.073 clat percentiles (msec): 00:45:51.073 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:45:51.073 | 30.00th=[ 
35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.073 | 70.00th=[ 39], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.073 | 99.00th=[ 80], 99.50th=[ 80], 99.90th=[ 92], 99.95th=[ 93], 00:45:51.073 | 99.99th=[ 108] 00:45:51.073 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.45, stdev=307.72, samples=20 00:45:51.073 iops : min= 224, max= 480, avg=419.10, stdev=76.93, samples=20 00:45:51.073 lat (msec) : 20=0.05%, 50=93.20%, 100=6.70%, 250=0.05% 00:45:51.073 cpu : usr=97.02%, sys=1.88%, ctx=177, majf=0, minf=9 00:45:51.073 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.073 filename0: (groupid=0, jobs=1): err= 0: pid=2317290: Mon Dec 9 10:56:34 2024 00:45:51.073 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10007msec) 00:45:51.073 slat (nsec): min=8267, max=93993, avg=24976.42, stdev=12911.61 00:45:51.073 clat (msec): min=27, max=113, avg=38.15, stdev=10.01 00:45:51.073 lat (msec): min=27, max=114, avg=38.17, stdev=10.02 00:45:51.073 clat percentiles (msec): 00:45:51.073 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:45:51.073 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.073 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.073 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 114], 99.95th=[ 114], 00:45:51.073 | 99.99th=[ 114] 00:45:51.073 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1664.00, stdev=333.24, samples=19 00:45:51.073 iops : min= 160, max= 480, avg=416.00, stdev=83.31, samples=19 00:45:51.073 lat (msec) : 50=93.06%, 100=6.51%, 250=0.43% 00:45:51.073 cpu : usr=98.22%, sys=1.35%, ctx=15, majf=0, minf=9 00:45:51.073 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.073 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.073 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.073 filename0: (groupid=0, jobs=1): err= 0: pid=2317291: Mon Dec 9 10:56:34 2024 00:45:51.073 read: IOPS=418, BW=1674KiB/s (1714kB/s)(16.4MiB/10016msec) 00:45:51.073 slat (usec): min=8, max=125, avg=32.68, stdev=12.20 00:45:51.074 clat (msec): min=17, max=105, avg=37.93, stdev= 9.85 00:45:51.074 lat (msec): min=17, max=105, avg=37.96, stdev= 9.86 00:45:51.074 clat percentiles (msec): 00:45:51.074 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.074 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.074 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.074 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 106], 99.95th=[ 106], 00:45:51.074 | 99.99th=[ 106] 00:45:51.074 bw ( KiB/s): min= 752, max= 1920, per=4.18%, avg=1670.74, stdev=334.91, samples=19 00:45:51.074 iops : min= 188, max= 480, avg=417.68, stdev=83.73, samples=19 00:45:51.074 lat (msec) : 20=0.38%, 50=92.80%, 100=6.39%, 250=0.43% 00:45:51.074 cpu : usr=98.14%, sys=1.40%, ctx=26, majf=0, minf=9 00:45:51.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.074 filename0: (groupid=0, jobs=1): err= 0: pid=2317293: Mon Dec 9 10:56:34 2024 00:45:51.074 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10013msec) 00:45:51.074 slat (nsec): min=5646, max=75824, avg=34888.54, stdev=10211.03 00:45:51.074 clat (msec): min=26, max=155, avg=38.06, stdev=11.38 00:45:51.074 lat (msec): min=26, max=156, avg=38.10, stdev=11.38 00:45:51.074 clat percentiles (msec): 00:45:51.074 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.074 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.074 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 66], 00:45:51.074 | 99.00th=[ 80], 99.50th=[ 80], 99.90th=[ 157], 99.95th=[ 157], 00:45:51.074 | 99.99th=[ 157] 00:45:51.074 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1664.00, stdev=351.84, samples=19 00:45:51.074 iops : min= 160, max= 480, avg=416.00, stdev=87.96, samples=19 00:45:51.074 lat (msec) : 50=93.53%, 100=6.03%, 250=0.43% 00:45:51.074 cpu : usr=98.05%, sys=1.37%, ctx=53, majf=0, minf=9 00:45:51.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.074 filename1: (groupid=0, jobs=1): err= 0: pid=2317294: Mon Dec 9 10:56:34 2024 00:45:51.074 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10010msec) 00:45:51.074 slat (usec): min=9, max=123, avg=41.11, stdev=19.27 00:45:51.074 clat (msec): min=26, max=178, avg=37.98, stdev=11.28 00:45:51.074 lat (msec): min=26, max=178, avg=38.02, stdev=11.29 00:45:51.074 clat percentiles (msec): 00:45:51.074 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.074 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.074 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 65], 00:45:51.074 | 99.00th=[ 79], 99.50th=[ 81], 99.90th=[ 153], 99.95th=[ 153], 00:45:51.074 | 99.99th=[ 180] 00:45:51.074 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1664.00, stdev=351.84, samples=19 00:45:51.074 iops : min= 160, max= 480, avg=416.00, stdev=87.96, samples=19 00:45:51.074 lat (msec) : 50=93.53%, 100=6.03%, 250=0.43% 00:45:51.074 cpu : usr=96.64%, sys=2.03%, ctx=217, majf=0, minf=9 00:45:51.074 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.074 filename1: (groupid=0, jobs=1): err= 0: pid=2317295: Mon Dec 9 10:56:34 2024 00:45:51.074 read: IOPS=416, BW=1667KiB/s (1707kB/s)(16.3MiB/10016msec) 00:45:51.074 slat (nsec): min=5303, max=99624, avg=34247.14, stdev=12126.27 00:45:51.074 clat (msec): min=26, max=159, avg=38.12, stdev=11.58 00:45:51.074 lat (msec): min=26, max=159, avg=38.16, stdev=11.58 00:45:51.074 clat percentiles (msec): 00:45:51.074 | 1.00th=[ 
34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.074 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.074 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 66], 00:45:51.074 | 99.00th=[ 80], 99.50th=[ 96], 99.90th=[ 161], 99.95th=[ 161], 00:45:51.074 | 99.99th=[ 161] 00:45:51.074 bw ( KiB/s): min= 625, max= 1904, per=4.16%, avg=1663.21, stdev=350.60, samples=19 00:45:51.074 iops : min= 156, max= 476, avg=415.79, stdev=87.69, samples=19 00:45:51.074 lat (msec) : 50=93.58%, 100=5.97%, 250=0.46% 00:45:51.074 cpu : usr=98.24%, sys=1.29%, ctx=12, majf=0, minf=9 00:45:51.074 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:45:51.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 issued rwts: total=4174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.074 filename1: (groupid=0, jobs=1): err= 0: pid=2317296: Mon Dec 9 10:56:34 2024 00:45:51.074 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10010msec) 00:45:51.074 slat (usec): min=6, max=131, avg=34.92, stdev=16.96 00:45:51.074 clat (msec): min=33, max=117, avg=38.03, stdev=10.06 00:45:51.074 lat (msec): min=33, max=117, avg=38.06, stdev=10.07 00:45:51.074 clat percentiles (msec): 00:45:51.074 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.074 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.074 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.074 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 117], 99.95th=[ 117], 00:45:51.074 | 99.99th=[ 117] 00:45:51.074 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1664.00, stdev=333.24, samples=19 00:45:51.074 iops : min= 160, max= 480, avg=416.00, stdev=83.31, samples=19 00:45:51.074 lat (msec) : 50=93.15%, 100=6.42%, 250=0.43% 00:45:51.074 cpu : usr=97.43%, sys=1.73%, ctx=171, majf=0, minf=10 00:45:51.074 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.074 filename1: (groupid=0, jobs=1): err= 0: pid=2317297: Mon Dec 9 10:56:34 2024 00:45:51.074 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10007msec) 00:45:51.074 slat (usec): min=5, max=103, avg=33.43, stdev=10.43 00:45:51.074 clat (msec): min=33, max=113, avg=38.03, stdev=10.03 00:45:51.074 lat (msec): min=33, max=113, avg=38.06, stdev=10.03 00:45:51.074 clat percentiles (msec): 00:45:51.074 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.074 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.074 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.074 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 114], 99.95th=[ 114], 00:45:51.074 | 99.99th=[ 114] 00:45:51.074 bw ( KiB/s): min= 641, max= 1920, per=4.17%, avg=1664.05, stdev=333.07, samples=19 00:45:51.074 iops : min= 160, max= 480, avg=416.00, stdev=83.31, samples=19 00:45:51.074 lat (msec) : 50=93.15%, 100=6.47%, 250=0.38% 00:45:51.074 cpu : usr=96.79%, sys=2.15%, ctx=123, majf=0, minf=9 00:45:51.074 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.074 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.074 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.074 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.074 filename1: (groupid=0, jobs=1): err= 0: pid=2317298: Mon Dec 9 10:56:34 2024 00:45:51.074 read: IOPS=417, BW=1670KiB/s (1710kB/s)(16.3MiB/10018msec) 00:45:51.074 slat (usec): min=9, max=139, avg=74.30, stdev=18.64 00:45:51.074 clat (msec): min=17, max=107, avg=37.62, stdev= 9.95 00:45:51.074 lat (msec): min=17, max=107, avg=37.69, stdev= 9.95 00:45:51.074 clat percentiles (msec): 00:45:51.074 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.074 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:45:51.074 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.074 | 99.00th=[ 80], 99.50th=[ 88], 99.90th=[ 108], 99.95th=[ 108], 00:45:51.074 | 99.99th=[ 108] 00:45:51.074 bw ( KiB/s): min= 640, max= 1920, per=4.18%, avg=1670.74, stdev=337.24, samples=19 00:45:51.074 iops : min= 160, max= 480, avg=417.68, stdev=84.31, samples=19 00:45:51.074 lat (msec) : 20=0.14%, 50=92.97%, 100=6.41%, 250=0.48% 00:45:51.074 cpu : usr=94.68%, sys=2.96%, ctx=865, majf=0, minf=9 00:45:51.074 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.0%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 issued rwts: total=4182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.075 filename1: (groupid=0, jobs=1): err= 0: pid=2317299: Mon Dec 9 10:56:34 2024 00:45:51.075 read: IOPS=419, BW=1678KiB/s (1718kB/s)(16.4MiB/10033msec) 00:45:51.075 slat (usec): min=8, max=131, avg=35.57, stdev=11.33 00:45:51.075 clat (usec): min=24618, max=93625, avg=37824.93, stdev=9123.46 00:45:51.075 lat (usec): min=24641, max=93663, avg=37860.50, stdev=9124.66 00:45:51.075 clat percentiles (usec): 00:45:51.075 | 1.00th=[33424], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:45:51.075 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[35390], 00:45:51.075 | 70.00th=[37487], 80.00th=[38011], 90.00th=[39060], 95.00th=[66323], 00:45:51.075 | 99.00th=[79168], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:45:51.075 | 99.99th=[93848] 00:45:51.075 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.45, stdev=307.72, samples=20 00:45:51.075 iops : min= 224, max= 480, avg=419.10, stdev=76.93, samples=20 00:45:51.075 lat (msec) : 50=93.20%, 100=6.80% 00:45:51.075 cpu : usr=96.88%, sys=1.86%, ctx=155, majf=0, minf=9 00:45:51.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.075 filename1: (groupid=0, jobs=1): err= 0: pid=2317300: Mon Dec 9 10:56:34 2024 00:45:51.075 read: IOPS=416, BW=1667KiB/s (1707kB/s)(16.4MiB/10064msec) 00:45:51.075 slat (nsec): min=8245, max=99160, avg=33847.17, stdev=11481.58 00:45:51.075 clat (msec): min=19, max=104, avg=37.89, stdev= 9.23 00:45:51.075 lat (msec): min=19, max=104, 
avg=37.93, stdev= 9.23 00:45:51.075 clat percentiles (msec): 00:45:51.075 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.075 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.075 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.075 | 99.00th=[ 80], 99.50th=[ 80], 99.90th=[ 81], 99.95th=[ 93], 00:45:51.075 | 99.99th=[ 105] 00:45:51.075 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.80, stdev=310.49, samples=20 00:45:51.075 iops : min= 224, max= 480, avg=419.20, stdev=77.62, samples=20 00:45:51.075 lat (msec) : 20=0.05%, 50=93.09%, 100=6.82%, 250=0.05% 00:45:51.075 cpu : usr=97.27%, sys=1.84%, ctx=145, majf=0, minf=9 00:45:51.075 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 issued rwts: total=4195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.075 filename1: (groupid=0, jobs=1): err= 0: pid=2317301: Mon Dec 9 10:56:34 2024 00:45:51.075 read: IOPS=419, BW=1677KiB/s (1717kB/s)(16.4MiB/10001msec) 00:45:51.075 slat (usec): min=8, max=136, avg=57.89, stdev=29.00 00:45:51.075 clat (msec): min=23, max=105, avg=37.67, stdev= 9.27 00:45:51.075 lat (msec): min=23, max=105, avg=37.73, stdev= 9.27 00:45:51.075 clat percentiles (msec): 00:45:51.075 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.075 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.075 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 66], 00:45:51.075 | 99.00th=[ 79], 99.50th=[ 80], 99.90th=[ 92], 99.95th=[ 104], 00:45:51.075 | 99.99th=[ 106] 00:45:51.075 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1677.47, stdev=316.12, samples=19 00:45:51.075 iops : min= 224, max= 480, avg=419.37, stdev=79.03, samples=19 00:45:51.075 lat (msec) : 50=93.27%, 100=6.63%, 250=0.10% 00:45:51.075 cpu : usr=97.18%, sys=1.81%, ctx=113, majf=0, minf=9 00:45:51.075 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.075 filename2: (groupid=0, jobs=1): err= 0: pid=2317302: Mon Dec 9 10:56:34 2024 00:45:51.075 read: IOPS=417, BW=1668KiB/s (1708kB/s)(16.3MiB/10012msec) 00:45:51.075 slat (nsec): min=7396, max=76822, avg=34980.76, stdev=8769.95 00:45:51.075 clat (msec): min=26, max=181, avg=38.06, stdev=11.46 00:45:51.075 lat (msec): min=26, max=181, avg=38.09, stdev=11.45 00:45:51.075 clat percentiles (msec): 00:45:51.075 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.075 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.075 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 66], 00:45:51.075 | 99.00th=[ 80], 99.50th=[ 80], 99.90th=[ 157], 99.95th=[ 157], 00:45:51.075 | 99.99th=[ 182] 00:45:51.075 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1664.00, stdev=351.84, samples=19 00:45:51.075 iops : min= 160, max= 480, avg=416.00, stdev=87.96, samples=19 00:45:51.075 lat (msec) : 50=93.58%, 100=6.03%, 250=0.38% 00:45:51.075 cpu : usr=98.03%, sys=1.40%, ctx=41, majf=0, minf=9 
00:45:51.075 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.075 filename2: (groupid=0, jobs=1): err= 0: pid=2317303: Mon Dec 9 10:56:34 2024 00:45:51.075 read: IOPS=419, BW=1677KiB/s (1718kB/s)(16.4MiB/10011msec) 00:45:51.075 slat (usec): min=8, max=127, avg=33.23, stdev=13.70 00:45:51.075 clat (msec): min=22, max=119, avg=37.89, stdev=10.10 00:45:51.075 lat (msec): min=22, max=119, avg=37.92, stdev=10.10 00:45:51.075 clat percentiles (msec): 00:45:51.075 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.075 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.075 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.075 | 99.00th=[ 80], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 105], 00:45:51.075 | 99.99th=[ 121] 00:45:51.075 bw ( KiB/s): min= 769, max= 1904, per=4.19%, avg=1673.32, stdev=324.36, samples=19 00:45:51.075 iops : min= 192, max= 476, avg=418.32, stdev=81.13, samples=19 00:45:51.075 lat (msec) : 50=93.52%, 100=5.96%, 250=0.52% 00:45:51.075 cpu : usr=98.35%, sys=1.20%, ctx=14, majf=0, minf=9 00:45:51.075 IO depths : 1=0.1%, 2=5.8%, 4=22.7%, 8=58.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:45:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 complete : 0=0.0%, 4=93.9%, 8=1.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 issued rwts: total=4198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.075 filename2: (groupid=0, jobs=1): err= 0: pid=2317304: Mon Dec 9 10:56:34 2024 00:45:51.075 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10010msec) 00:45:51.075 slat (nsec): min=8503, max=99744, avg=29576.74, stdev=14163.05 00:45:51.075 clat (msec): min=26, max=152, avg=38.12, stdev=11.19 00:45:51.075 lat (msec): min=26, max=152, avg=38.15, stdev=11.20 00:45:51.075 clat percentiles (msec): 00:45:51.075 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.075 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.075 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 66], 00:45:51.075 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 153], 99.95th=[ 153], 00:45:51.075 | 99.99th=[ 153] 00:45:51.075 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1664.00, stdev=351.84, samples=19 00:45:51.075 iops : min= 160, max= 480, avg=416.00, stdev=87.96, samples=19 00:45:51.075 lat (msec) : 50=93.53%, 100=6.03%, 250=0.43% 00:45:51.075 cpu : usr=98.04%, sys=1.50%, ctx=17, majf=0, minf=9 00:45:51.075 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.075 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.075 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.075 filename2: (groupid=0, jobs=1): err= 0: pid=2317306: Mon Dec 9 10:56:34 2024 00:45:51.075 read: IOPS=419, BW=1679KiB/s (1720kB/s)(16.4MiB/10023msec) 00:45:51.075 slat (usec): min=8, max=143, avg=32.60, stdev=34.28 00:45:51.076 clat (usec): min=23774, max=98741, avg=37813.48, 
stdev=8839.52 00:45:51.076 lat (usec): min=23845, max=98812, avg=37846.08, stdev=8854.75 00:45:51.076 clat percentiles (usec): 00:45:51.076 | 1.00th=[32900], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:45:51.076 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[35390], 00:45:51.076 | 70.00th=[37487], 80.00th=[38011], 90.00th=[39060], 95.00th=[64750], 00:45:51.076 | 99.00th=[78119], 99.50th=[79168], 99.90th=[80217], 99.95th=[89654], 00:45:51.076 | 99.99th=[99091] 00:45:51.076 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.80, stdev=304.89, samples=20 00:45:51.076 iops : min= 224, max= 480, avg=419.20, stdev=76.22, samples=20 00:45:51.076 lat (msec) : 50=93.16%, 100=6.84% 00:45:51.076 cpu : usr=97.99%, sys=1.32%, ctx=57, majf=0, minf=9 00:45:51.076 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.076 filename2: (groupid=0, jobs=1): err= 0: pid=2317307: Mon Dec 9 10:56:34 2024 00:45:51.076 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10011msec) 00:45:51.076 slat (nsec): min=8567, max=69847, avg=33230.49, stdev=9193.27 00:45:51.076 clat (msec): min=26, max=180, avg=38.08, stdev=11.40 00:45:51.076 lat (msec): min=26, max=180, avg=38.11, stdev=11.40 00:45:51.076 clat percentiles (msec): 00:45:51.076 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.076 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.076 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 66], 00:45:51.076 | 99.00th=[ 79], 99.50th=[ 80], 99.90th=[ 155], 99.95th=[ 155], 00:45:51.076 | 99.99th=[ 182] 00:45:51.076 bw ( KiB/s): min= 641, max= 1920, per=4.17%, avg=1664.05, stdev=351.68, samples=19 00:45:51.076 iops : min= 160, max= 480, avg=416.00, stdev=87.96, samples=19 00:45:51.076 lat (msec) : 50=93.53%, 100=6.03%, 250=0.43% 00:45:51.076 cpu : usr=97.07%, sys=1.82%, ctx=216, majf=0, minf=9 00:45:51.076 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.076 filename2: (groupid=0, jobs=1): err= 0: pid=2317308: Mon Dec 9 10:56:34 2024 00:45:51.076 read: IOPS=416, BW=1667KiB/s (1707kB/s)(16.4MiB/10065msec) 00:45:51.076 slat (usec): min=8, max=111, avg=36.25, stdev=12.26 00:45:51.076 clat (msec): min=20, max=102, avg=37.86, stdev= 9.26 00:45:51.076 lat (msec): min=20, max=103, avg=37.90, stdev= 9.27 00:45:51.076 clat percentiles (msec): 00:45:51.076 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.076 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.076 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.076 | 99.00th=[ 79], 99.50th=[ 80], 99.90th=[ 94], 99.95th=[ 103], 00:45:51.076 | 99.99th=[ 104] 00:45:51.076 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.80, stdev=310.49, samples=20 00:45:51.076 iops : min= 224, max= 480, avg=419.20, stdev=77.62, samples=20 00:45:51.076 lat (msec) : 50=93.23%, 
100=6.68%, 250=0.10% 00:45:51.076 cpu : usr=98.29%, sys=1.25%, ctx=17, majf=0, minf=9 00:45:51.076 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:51.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 issued rwts: total=4194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.076 filename2: (groupid=0, jobs=1): err= 0: pid=2317309: Mon Dec 9 10:56:34 2024 00:45:51.076 read: IOPS=419, BW=1678KiB/s (1718kB/s)(16.4MiB/10031msec) 00:45:51.076 slat (nsec): min=7044, max=84395, avg=33414.97, stdev=11450.78 00:45:51.076 clat (msec): min=23, max=104, avg=37.83, stdev= 9.21 00:45:51.076 lat (msec): min=23, max=104, avg=37.87, stdev= 9.20 00:45:51.076 clat percentiles (msec): 00:45:51.076 | 1.00th=[ 34], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.076 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.076 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.076 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 95], 99.95th=[ 105], 00:45:51.076 | 99.99th=[ 105] 00:45:51.076 bw ( KiB/s): min= 896, max= 1920, per=4.20%, avg=1676.80, stdev=304.89, samples=20 00:45:51.076 iops : min= 224, max= 480, avg=419.20, stdev=76.22, samples=20 00:45:51.076 lat (msec) : 50=93.58%, 100=6.32%, 250=0.10% 00:45:51.076 cpu : usr=97.22%, sys=1.89%, ctx=95, majf=0, minf=9 00:45:51.076 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:51.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.076 filename2: (groupid=0, jobs=1): err= 0: pid=2317310: Mon Dec 9 10:56:34 2024 00:45:51.076 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10006msec) 00:45:51.076 slat (usec): min=10, max=130, avg=52.07, stdev=25.44 00:45:51.076 clat (msec): min=32, max=112, avg=37.89, stdev= 9.98 00:45:51.076 lat (msec): min=32, max=113, avg=37.94, stdev= 9.98 00:45:51.076 clat percentiles (msec): 00:45:51.076 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:51.076 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:45:51.076 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 67], 00:45:51.076 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 113], 99.95th=[ 113], 00:45:51.076 | 99.99th=[ 113] 00:45:51.076 bw ( KiB/s): min= 640, max= 1920, per=4.16%, avg=1664.00, stdev=333.24, samples=19 00:45:51.076 iops : min= 160, max= 480, avg=416.00, stdev=83.31, samples=19 00:45:51.076 lat (msec) : 50=93.15%, 100=6.47%, 250=0.38% 00:45:51.076 cpu : usr=97.73%, sys=1.47%, ctx=64, majf=0, minf=9 00:45:51.076 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:51.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:51.076 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:51.076 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:51.076 00:45:51.076 Run status group 0 (all jobs): 00:45:51.076 READ: bw=39.0MiB/s (40.9MB/s), 1667KiB/s-1679KiB/s (1707kB/s-1720kB/s), io=393MiB (412MB), run=10001-10065msec 00:45:51.076 10:56:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.076 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
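[Editor's sketch] The destroy sequence just traced, and the create sequence that follows for the NULL_DIF=1 run, are small helpers in target/dif.sh. Below is a hedged reconstruction of the pair from the traced RPCs, not the verbatim script: rpc_cmd wraps SPDK's scripts/rpc.py in these tests, and reading --dif-type from $NULL_DIF is inferred from the NULL_DIF=1 / --dif-type 1 pairing in the trace.

# Teardown, as invoked at dif.sh:113 above.
destroy_subsystem() {
    local sub_id=$1
    # Drop the NVMe-oF subsystem first so nothing still references the
    # namespace, then delete the null bdev that backed it.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
    rpc_cmd bdev_null_delete "bdev_null$sub_id"
}

destroy_subsystems() {
    local sub
    for sub in "$@"; do
        destroy_subsystem "$sub"
    done
}

# Setup, as traced just below: a 64 MiB null bdev with 512 B blocks and
# 16 B of per-block metadata carrying the requested DIF type, exported
# over TCP (10.0.0.2:4420 in this run).
create_subsystem() {
    local sub_id=$1
    rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 \
        --md-size 16 --dif-type "$NULL_DIF"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
}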
00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 bdev_null0 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 [2024-12-09 10:56:35.387279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 bdev_null1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:51.077 { 00:45:51.077 "params": { 00:45:51.077 "name": "Nvme$subsystem", 00:45:51.077 "trtype": "$TEST_TRANSPORT", 00:45:51.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:51.077 "adrfam": "ipv4", 00:45:51.077 "trsvcid": "$NVMF_PORT", 00:45:51.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:51.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:51.077 "hdgst": ${hdgst:-false}, 00:45:51.077 "ddgst": ${ddgst:-false} 00:45:51.077 }, 00:45:51.077 "method": "bdev_nvme_attach_controller" 00:45:51.077 } 00:45:51.077 EOF 00:45:51.077 )") 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 
-- # local fio_dir=/usr/src/fio 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:51.077 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:51.077 { 00:45:51.077 "params": { 00:45:51.077 "name": "Nvme$subsystem", 00:45:51.077 "trtype": "$TEST_TRANSPORT", 00:45:51.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:51.077 "adrfam": "ipv4", 00:45:51.077 "trsvcid": "$NVMF_PORT", 00:45:51.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:51.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:51.078 "hdgst": ${hdgst:-false}, 00:45:51.078 "ddgst": ${ddgst:-false} 00:45:51.078 }, 00:45:51.078 "method": "bdev_nvme_attach_controller" 00:45:51.078 } 00:45:51.078 EOF 00:45:51.078 )") 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
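[Editor's sketch] The sanitizer probing interleaved through this trace (common/autotest_common.sh, lines ~1341-1356) exists because a fio ioengine plugin linked against ASan must have the sanitizer runtime loaded first via LD_PRELOAD, or ASan aborts at startup. A simplified sketch of that helper follows; the break on first match is an assumption, since both greps come back empty in this run.

fio_plugin() {
    local fio_dir=/usr/src/fio
    local sanitizers=('libasan' 'libclang_rt.asan')
    local plugin=$1 sanitizer asan_lib=
    shift
    # Scrape the sanitizer runtime's path (third ldd column) from the
    # plugin's link dependencies, if one is present.
    for sanitizer in "${sanitizers[@]}"; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # With no sanitizer found, asan_lib stays empty, which leaves the lone
    # leading space seen in the traced LD_PRELOAD=' .../build/fio/spdk_bdev'.
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

# Invocation as traced: JSON bdev config arrives on fd 62, the generated
# fio job file on fd 61.
fio_plugin "$rootdir/build/fio/spdk_bdev" \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61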
00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:51.078 "params": { 00:45:51.078 "name": "Nvme0", 00:45:51.078 "trtype": "tcp", 00:45:51.078 "traddr": "10.0.0.2", 00:45:51.078 "adrfam": "ipv4", 00:45:51.078 "trsvcid": "4420", 00:45:51.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:51.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:51.078 "hdgst": false, 00:45:51.078 "ddgst": false 00:45:51.078 }, 00:45:51.078 "method": "bdev_nvme_attach_controller" 00:45:51.078 },{ 00:45:51.078 "params": { 00:45:51.078 "name": "Nvme1", 00:45:51.078 "trtype": "tcp", 00:45:51.078 "traddr": "10.0.0.2", 00:45:51.078 "adrfam": "ipv4", 00:45:51.078 "trsvcid": "4420", 00:45:51.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:51.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:51.078 "hdgst": false, 00:45:51.078 "ddgst": false 00:45:51.078 }, 00:45:51.078 "method": "bdev_nvme_attach_controller" 00:45:51.078 }' 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:51.078 10:56:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:51.347 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:51.347 ... 00:45:51.347 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:51.347 ... 
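[Editor's sketch] The two job-description lines above decode the dif.sh:115 parameters: bs=8k,16k,128k maps to the (R)/(W)/(T) block sizes, files=1 yields the two filename sections, and numjobs=2 doubles each of them, hence the four threads started next. A hypothetical equivalent of the job file the test generates and passes on /dev/fd/61 is shown below; the real file comes from gen_fio_conf, so the section names, bdev names, and exact option set here are assumptions.

[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1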
00:45:51.347 fio-3.35 00:45:51.347 Starting 4 threads 00:45:57.917 00:45:57.917 filename0: (groupid=0, jobs=1): err= 0: pid=2318596: Mon Dec 9 10:56:41 2024 00:45:57.917 read: IOPS=817, BW=6543KiB/s (6700kB/s)(32.0MiB/5003msec) 00:45:57.917 slat (nsec): min=5890, max=78084, avg=16496.91, stdev=10310.45 00:45:57.917 clat (usec): min=1953, max=19400, avg=9713.99, stdev=1871.20 00:45:57.917 lat (usec): min=1961, max=19408, avg=9730.49, stdev=1870.80 00:45:57.917 clat percentiles (usec): 00:45:57.917 | 1.00th=[ 4555], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8717], 00:45:57.917 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:45:57.917 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10683], 95.00th=[11731], 00:45:57.917 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19268], 99.95th=[19268], 00:45:57.917 | 99.99th=[19530] 00:45:57.917 bw ( KiB/s): min= 6096, max= 7152, per=24.47%, avg=6535.11, stdev=337.25, samples=9 00:45:57.917 iops : min= 762, max= 894, avg=816.89, stdev=42.16, samples=9 00:45:57.917 lat (msec) : 2=0.10%, 4=0.37%, 10=55.99%, 20=43.55% 00:45:57.917 cpu : usr=95.54%, sys=3.82%, ctx=51, majf=0, minf=0 00:45:57.917 IO depths : 1=0.5%, 2=20.0%, 4=54.4%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.917 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.917 issued rwts: total=4092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.917 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.917 filename0: (groupid=0, jobs=1): err= 0: pid=2318597: Mon Dec 9 10:56:41 2024 00:45:57.917 read: IOPS=847, BW=6783KiB/s (6946kB/s)(33.1MiB/5002msec) 00:45:57.917 slat (nsec): min=5416, max=81110, avg=15483.56, stdev=7464.76 00:45:57.917 clat (usec): min=2452, max=15149, avg=9371.47, stdev=1479.36 00:45:57.917 lat (usec): min=2470, max=15160, avg=9386.95, stdev=1478.67 00:45:57.917 clat percentiles (usec): 00:45:57.918 | 1.00th=[ 4555], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 8356], 00:45:57.918 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[ 9896], 00:45:57.918 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10552], 95.00th=[10814], 00:45:57.918 | 99.00th=[11994], 99.50th=[12649], 99.90th=[14091], 99.95th=[14484], 00:45:57.918 | 99.99th=[15139] 00:45:57.918 bw ( KiB/s): min= 6144, max= 7296, per=25.34%, avg=6767.78, stdev=418.51, samples=9 00:45:57.918 iops : min= 768, max= 912, avg=845.89, stdev=52.22, samples=9 00:45:57.918 lat (msec) : 4=0.14%, 10=61.78%, 20=38.08% 00:45:57.918 cpu : usr=95.66%, sys=3.82%, ctx=10, majf=0, minf=9 00:45:57.918 IO depths : 1=0.9%, 2=23.5%, 4=51.1%, 8=24.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.918 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.918 issued rwts: total=4241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.918 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.918 filename1: (groupid=0, jobs=1): err= 0: pid=2318598: Mon Dec 9 10:56:41 2024 00:45:57.918 read: IOPS=829, BW=6634KiB/s (6793kB/s)(32.4MiB/5002msec) 00:45:57.918 slat (nsec): min=5358, max=74782, avg=14463.06, stdev=6625.24 00:45:57.918 clat (usec): min=1296, max=19517, avg=9587.47, stdev=1867.36 00:45:57.918 lat (usec): min=1310, max=19532, avg=9601.93, stdev=1866.78 00:45:57.918 clat percentiles (usec): 00:45:57.918 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 7111], 20.00th=[ 8455], 00:45:57.918 | 
30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:45:57.918 | 70.00th=[10421], 80.00th=[10421], 90.00th=[10552], 95.00th=[11863], 00:45:57.918 | 99.00th=[15270], 99.50th=[16319], 99.90th=[18744], 99.95th=[18744], 00:45:57.918 | 99.99th=[19530] 00:45:57.918 bw ( KiB/s): min= 6080, max= 7280, per=24.86%, avg=6639.78, stdev=434.72, samples=9 00:45:57.918 iops : min= 760, max= 910, avg=829.89, stdev=54.26, samples=9 00:45:57.918 lat (msec) : 2=0.07%, 4=0.34%, 10=55.38%, 20=44.21% 00:45:57.918 cpu : usr=96.02%, sys=3.52%, ctx=7, majf=0, minf=9 00:45:57.918 IO depths : 1=0.7%, 2=21.0%, 4=53.5%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.918 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.918 issued rwts: total=4148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.918 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.918 filename1: (groupid=0, jobs=1): err= 0: pid=2318599: Mon Dec 9 10:56:41 2024 00:45:57.918 read: IOPS=843, BW=6751KiB/s (6913kB/s)(33.0MiB/5001msec) 00:45:57.918 slat (nsec): min=5645, max=80997, avg=14346.72, stdev=6938.80 00:45:57.918 clat (usec): min=1640, max=17427, avg=9421.45, stdev=1544.79 00:45:57.918 lat (usec): min=1654, max=17443, avg=9435.79, stdev=1544.00 00:45:57.918 clat percentiles (usec): 00:45:57.918 | 1.00th=[ 4555], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 8455], 00:45:57.918 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:45:57.918 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10552], 95.00th=[10814], 00:45:57.918 | 99.00th=[12911], 99.50th=[13829], 99.90th=[17171], 99.95th=[17171], 00:45:57.918 | 99.99th=[17433] 00:45:57.918 bw ( KiB/s): min= 6144, max= 7296, per=25.27%, avg=6748.44, stdev=404.52, samples=9 00:45:57.918 iops : min= 768, max= 912, avg=843.56, stdev=50.56, samples=9 00:45:57.918 lat (msec) : 2=0.05%, 4=0.14%, 10=59.36%, 20=40.45% 00:45:57.918 cpu : usr=93.60%, sys=4.60%, ctx=100, majf=0, minf=9 00:45:57.918 IO depths : 1=1.3%, 2=22.8%, 4=51.8%, 8=24.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.918 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.918 issued rwts: total=4220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.918 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:57.918 00:45:57.918 Run status group 0 (all jobs): 00:45:57.918 READ: bw=26.1MiB/s (27.3MB/s), 6543KiB/s-6783KiB/s (6700kB/s-6946kB/s), io=130MiB (137MB), run=5001-5003msec 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.918 
10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.918 00:45:57.918 real 0m26.014s 00:45:57.918 user 4m33.880s 00:45:57.918 sys 0m6.941s 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 ************************************ 00:45:57.918 END TEST fio_dif_rand_params 00:45:57.918 ************************************ 00:45:57.918 10:56:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:57.918 10:56:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:57.918 10:56:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 ************************************ 00:45:57.918 START TEST fio_dif_digest 00:45:57.918 ************************************ 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:57.918 10:56:42 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 bdev_null0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.918 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.918 [2024-12-09 10:56:42.364596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:57.919 { 00:45:57.919 "params": { 00:45:57.919 "name": "Nvme$subsystem", 00:45:57.919 "trtype": "$TEST_TRANSPORT", 00:45:57.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:57.919 "adrfam": "ipv4", 00:45:57.919 "trsvcid": "$NVMF_PORT", 00:45:57.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:57.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:57.919 "hdgst": ${hdgst:-false}, 00:45:57.919 "ddgst": ${ddgst:-false} 00:45:57.919 }, 00:45:57.919 "method": "bdev_nvme_attach_controller" 00:45:57.919 } 00:45:57.919 EOF 00:45:57.919 )") 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:57.919 10:56:42 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:57.919 "params": { 00:45:57.919 "name": "Nvme0", 00:45:57.919 "trtype": "tcp", 00:45:57.919 "traddr": "10.0.0.2", 00:45:57.919 "adrfam": "ipv4", 00:45:57.919 "trsvcid": "4420", 00:45:57.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:57.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:57.919 "hdgst": true, 00:45:57.919 "ddgst": true 00:45:57.919 }, 00:45:57.919 "method": "bdev_nvme_attach_controller" 00:45:57.919 }' 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:57.919 10:56:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:58.179 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:58.179 ... 00:45:58.179 fio-3.35 00:45:58.179 Starting 3 threads 00:46:10.379 00:46:10.379 filename0: (groupid=0, jobs=1): err= 0: pid=2319470: Mon Dec 9 10:56:53 2024 00:46:10.379 read: IOPS=94, BW=11.8MiB/s (12.3MB/s)(118MiB/10052msec) 00:46:10.379 slat (nsec): min=7820, max=54403, avg=18060.25, stdev=7692.75 00:46:10.379 clat (usec): min=14186, max=77365, avg=31836.08, stdev=6983.65 00:46:10.379 lat (usec): min=14201, max=77378, avg=31854.14, stdev=6983.94 00:46:10.379 clat percentiles (usec): 00:46:10.379 | 1.00th=[15401], 5.00th=[23725], 10.00th=[25297], 20.00th=[27657], 00:46:10.379 | 30.00th=[29230], 40.00th=[30540], 50.00th=[31851], 60.00th=[32900], 00:46:10.379 | 70.00th=[33817], 80.00th=[34866], 90.00th=[36439], 95.00th=[38011], 00:46:10.379 | 99.00th=[70779], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:46:10.379 | 99.99th=[77071] 00:46:10.379 bw ( KiB/s): min= 9984, max=14592, per=35.42%, avg=12057.60, stdev=1147.58, samples=20 00:46:10.379 iops : min= 78, max= 114, avg=94.20, stdev= 8.97, samples=20 00:46:10.379 lat (msec) : 20=2.54%, 50=95.66%, 100=1.80% 00:46:10.379 cpu : usr=92.78%, sys=5.80%, ctx=303, majf=0, minf=152 00:46:10.379 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:10.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.379 issued rwts: total=945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:10.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:10.379 filename0: (groupid=0, jobs=1): err= 0: pid=2319471: Mon Dec 9 10:56:53 2024 00:46:10.379 read: IOPS=90, BW=11.3MiB/s (11.9MB/s)(114MiB/10047msec) 00:46:10.379 slat (nsec): min=6042, max=70542, avg=17179.93, stdev=8710.58 00:46:10.379 clat (usec): min=15306, max=76548, avg=33015.37, stdev=5505.98 00:46:10.379 lat (usec): min=15321, max=76564, avg=33032.55, stdev=5507.20 00:46:10.379 clat percentiles (usec): 00:46:10.379 | 1.00th=[17171], 5.00th=[23987], 10.00th=[26608], 20.00th=[28967], 00:46:10.379 | 30.00th=[31327], 40.00th=[32637], 50.00th=[33817], 60.00th=[34866], 00:46:10.379 | 70.00th=[35914], 80.00th=[36439], 90.00th=[38011], 95.00th=[39060], 00:46:10.379 | 99.00th=[41681], 99.50th=[67634], 99.90th=[77071], 99.95th=[77071], 00:46:10.379 | 99.99th=[77071] 00:46:10.379 bw ( KiB/s): min=10496, max=14080, per=34.15%, avg=11623.45, stdev=879.48, samples=20 00:46:10.379 iops : min= 82, max= 110, avg=90.80, stdev= 6.88, samples=20 00:46:10.379 lat (msec) : 20=1.98%, 50=97.48%, 100=0.55% 00:46:10.379 cpu : usr=94.82%, sys=4.70%, ctx=19, majf=0, minf=159 00:46:10.379 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:10.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.379 issued rwts: total=911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:10.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:10.379 filename0: (groupid=0, jobs=1): err= 0: pid=2319472: Mon Dec 9 10:56:53 
2024 00:46:10.379 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(102MiB/10048msec) 00:46:10.379 slat (nsec): min=8540, max=31361, avg=15530.36, stdev=2789.91 00:46:10.379 clat (usec): min=15379, max=70606, avg=36821.29, stdev=6193.41 00:46:10.379 lat (usec): min=15397, max=70621, avg=36836.82, stdev=6193.25 00:46:10.379 clat percentiles (usec): 00:46:10.379 | 1.00th=[17171], 5.00th=[26084], 10.00th=[28443], 20.00th=[31851], 00:46:10.379 | 30.00th=[34341], 40.00th=[36439], 50.00th=[38011], 60.00th=[39584], 00:46:10.379 | 70.00th=[40633], 80.00th=[41681], 90.00th=[43254], 95.00th=[44827], 00:46:10.379 | 99.00th=[47449], 99.50th=[48497], 99.90th=[70779], 99.95th=[70779], 00:46:10.379 | 99.99th=[70779] 00:46:10.379 bw ( KiB/s): min= 8448, max=13056, per=30.65%, avg=10432.00, stdev=1010.01, samples=20 00:46:10.379 iops : min= 66, max= 102, avg=81.50, stdev= 7.89, samples=20 00:46:10.379 lat (msec) : 20=1.84%, 50=97.67%, 100=0.49% 00:46:10.379 cpu : usr=94.88%, sys=4.62%, ctx=18, majf=0, minf=88 00:46:10.379 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:10.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:10.379 issued rwts: total=817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:10.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:10.379 00:46:10.379 Run status group 0 (all jobs): 00:46:10.379 READ: bw=33.2MiB/s (34.9MB/s), 10.2MiB/s-11.8MiB/s (10.7MB/s-12.3MB/s), io=334MiB (350MB), run=10047-10052msec 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:10.379 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.380 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:10.380 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.380 00:46:10.380 real 0m11.733s 00:46:10.380 user 0m30.117s 00:46:10.380 sys 0m1.944s 00:46:10.380 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:10.380 10:56:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:10.380 ************************************ 00:46:10.380 END TEST fio_dif_digest 00:46:10.380 ************************************ 00:46:10.380 10:56:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:10.380 10:56:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:46:10.380 10:56:54 
nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:10.380 rmmod nvme_tcp 00:46:10.380 rmmod nvme_fabrics 00:46:10.380 rmmod nvme_keyring 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2313295 ']' 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2313295 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2313295 ']' 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2313295 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2313295 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2313295' 00:46:10.380 killing process with pid 2313295 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2313295 00:46:10.380 10:56:54 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2313295 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:10.380 10:56:54 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:11.758 Waiting for block devices as requested 00:46:11.758 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:46:11.758 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:12.018 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:12.018 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:12.018 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:12.279 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:12.279 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:12.279 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:12.539 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:12.539 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:12.539 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:12.539 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:12.800 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:12.800 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:12.800 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:13.063 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:13.063 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@302 
-- # remove_spdk_ns 00:46:13.063 10:56:57 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:13.063 10:56:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:13.063 10:56:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:15.095 10:56:59 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:15.095 00:46:15.095 real 1m13.421s 00:46:15.095 user 6m37.058s 00:46:15.095 sys 0m20.398s 00:46:15.095 10:56:59 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:15.385 10:56:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:15.385 ************************************ 00:46:15.385 END TEST nvmf_dif 00:46:15.385 ************************************ 00:46:15.385 10:56:59 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:15.385 10:56:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:15.385 10:56:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:15.385 10:56:59 -- common/autotest_common.sh@10 -- # set +x 00:46:15.385 ************************************ 00:46:15.385 START TEST nvmf_abort_qd_sizes 00:46:15.385 ************************************ 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:15.385 * Looking for test storage... 00:46:15.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:15.385 10:56:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:15.385 --rc genhtml_branch_coverage=1 00:46:15.385 --rc genhtml_function_coverage=1 00:46:15.385 --rc genhtml_legend=1 00:46:15.385 --rc geninfo_all_blocks=1 00:46:15.385 --rc geninfo_unexecuted_blocks=1 00:46:15.385 00:46:15.385 ' 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:15.385 --rc genhtml_branch_coverage=1 00:46:15.385 --rc genhtml_function_coverage=1 00:46:15.385 --rc genhtml_legend=1 00:46:15.385 --rc geninfo_all_blocks=1 00:46:15.385 --rc geninfo_unexecuted_blocks=1 00:46:15.385 00:46:15.385 ' 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:15.385 --rc genhtml_branch_coverage=1 00:46:15.385 --rc genhtml_function_coverage=1 00:46:15.385 --rc genhtml_legend=1 00:46:15.385 --rc geninfo_all_blocks=1 00:46:15.385 --rc geninfo_unexecuted_blocks=1 00:46:15.385 00:46:15.385 ' 00:46:15.385 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:15.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:15.385 --rc genhtml_branch_coverage=1 00:46:15.386 --rc genhtml_function_coverage=1 00:46:15.386 --rc genhtml_legend=1 00:46:15.386 --rc geninfo_all_blocks=1 00:46:15.386 --rc geninfo_unexecuted_blocks=1 00:46:15.386 00:46:15.386 ' 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:15.386 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:15.686 10:57:00 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:15.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:46:15.687 10:57:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:46:18.428 Found 0000:84:00.0 (0x8086 - 0x159b) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:46:18.428 Found 0000:84:00.1 (0x8086 - 0x159b) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:46:18.428 Found net devices under 0000:84:00.0: cvl_0_0 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:46:18.428 Found net devices under 0000:84:00.1: cvl_0_1 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:18.428 10:57:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:18.428 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:18.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:18.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:46:18.429 00:46:18.429 --- 10.0.0.2 ping statistics --- 00:46:18.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:18.429 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:18.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:18.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:46:18.429 00:46:18.429 --- 10.0.0.1 ping statistics --- 00:46:18.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:18.429 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:18.429 10:57:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:19.873 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:19.873 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:19.873 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:19.873 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:19.873 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:19.873 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:19.873 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:19.873 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:19.873 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:20.809 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:46:21.069 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2324555 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2324555 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2324555 ']' 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
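Here nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace and then blocks on its RPC socket before issuing any configuration calls. A standalone sketch of that start-and-wait step, assuming the same tree; the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation:

  # Start nvmf_tgt in the target namespace and block until /var/tmp/spdk.sock answers.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # Unix-domain sockets are filesystem-scoped, not netns-scoped, so rpc.py can
  # reach the target from the root namespace; rpc_get_methods succeeds once the
  # app is listening.
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"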
00:46:21.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:21.070 10:57:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:21.070 [2024-12-09 10:57:05.588569] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:46:21.070 [2024-12-09 10:57:05.588676] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:21.329 [2024-12-09 10:57:05.738659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:21.329 [2024-12-09 10:57:05.865530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:21.329 [2024-12-09 10:57:05.865639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:21.329 [2024-12-09 10:57:05.865675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:21.329 [2024-12-09 10:57:05.865717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:21.329 [2024-12-09 10:57:05.865743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:21.329 [2024-12-09 10:57:05.869001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:21.329 [2024-12-09 10:57:05.869108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:21.329 [2024-12-09 10:57:05.869213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:21.329 [2024-12-09 10:57:05.869217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:82:00.0 ]] 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:21.589 
10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:82:00.0 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:21.589 10:57:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:21.589 ************************************ 00:46:21.589 START TEST spdk_target_abort 00:46:21.589 ************************************ 00:46:21.589 10:57:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:46:21.589 10:57:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:21.589 10:57:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:46:21.589 10:57:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:21.589 10:57:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:24.876 spdk_targetn1 00:46:24.876 10:57:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.876 10:57:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:24.876 10:57:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.876 10:57:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:24.876 [2024-12-09 10:57:09.004454] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:24.876 [2024-12-09 10:57:09.048869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:24.876 10:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:28.160 Initializing NVMe Controllers 00:46:28.160 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:28.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:28.160 Initialization complete. Launching workers. 00:46:28.160 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11948, failed: 0 00:46:28.160 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1391, failed to submit 10557 00:46:28.160 success 722, unsuccessful 669, failed 0 00:46:28.160 10:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:28.160 10:57:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:31.449 Initializing NVMe Controllers 00:46:31.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:31.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:31.449 Initialization complete. Launching workers. 00:46:31.449 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8851, failed: 0 00:46:31.449 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1298, failed to submit 7553 00:46:31.449 success 318, unsuccessful 980, failed 0 00:46:31.449 10:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:31.449 10:57:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:34.735 Initializing NVMe Controllers 00:46:34.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:34.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:34.735 Initialization complete. Launching workers. 
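The three abort runs in this test come from a single sweep over queue depths; condensed from the rabort() trace, the loop is simply (invocation copied from the log):

    for qd in 4 24 64; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Each pass mixes 50% reads and writes at the given depth and issues aborts against in-flight commands; the success/unsuccessful counts appear to tally aborts that did and did not catch their target command, and neither outcome fails the test.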
00:46:34.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31379, failed: 0 00:46:34.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2601, failed to submit 28778 00:46:34.735 success 523, unsuccessful 2078, failed 0 00:46:34.735 10:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:34.735 10:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:34.735 10:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:34.735 10:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:34.735 10:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:34.735 10:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:34.735 10:57:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2324555 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2324555 ']' 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2324555 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324555 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324555' 00:46:35.668 killing process with pid 2324555 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2324555 00:46:35.668 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2324555 00:46:36.237 00:46:36.237 real 0m14.449s 00:46:36.237 user 0m54.848s 00:46:36.237 sys 0m3.002s 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:36.237 ************************************ 00:46:36.237 END TEST spdk_target_abort 00:46:36.237 ************************************ 00:46:36.237 10:57:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:36.237 10:57:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:36.237 10:57:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:36.237 10:57:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:36.237 ************************************ 00:46:36.237 START TEST kernel_target_abort 00:46:36.237 
************************************ 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:36.237 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:36.238 10:57:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:37.615 Waiting for block devices as requested 00:46:37.875 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:46:37.875 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:38.133 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:38.133 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:38.133 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:38.392 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:38.393 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:38.393 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:38.393 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:38.651 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:38.651 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:38.651 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:38.909 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:38.909 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:38.909 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:38.909 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:39.169 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:39.169 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:39.429 No valid GPT data, bailing 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:39.429 10:57:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:46:39.429 00:46:39.429 Discovery Log Number of Records 2, Generation counter 2 00:46:39.429 =====Discovery Log Entry 0====== 00:46:39.429 trtype: tcp 00:46:39.429 adrfam: ipv4 00:46:39.429 subtype: current discovery subsystem 00:46:39.429 treq: not specified, sq flow control disable supported 00:46:39.429 portid: 1 00:46:39.429 trsvcid: 4420 00:46:39.429 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:39.429 traddr: 10.0.0.1 00:46:39.429 eflags: none 00:46:39.429 sectype: none 00:46:39.429 =====Discovery Log Entry 1====== 00:46:39.429 trtype: tcp 00:46:39.429 adrfam: ipv4 00:46:39.429 subtype: nvme subsystem 00:46:39.429 treq: not specified, sq flow control disable supported 00:46:39.429 portid: 1 00:46:39.429 trsvcid: 4420 00:46:39.429 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:39.429 traddr: 10.0.0.1 00:46:39.429 eflags: none 00:46:39.429 sectype: none 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:39.429 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:39.430 10:57:23 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:39.430 10:57:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:42.722 Initializing NVMe Controllers 00:46:42.722 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:42.722 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:42.722 Initialization complete. Launching workers. 00:46:42.722 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 19839, failed: 0 00:46:42.722 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19839, failed to submit 0 00:46:42.722 success 0, unsuccessful 19839, failed 0 00:46:42.722 10:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:42.722 10:57:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:46.019 Initializing NVMe Controllers 00:46:46.019 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:46.019 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:46.019 Initialization complete. Launching workers. 
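The kernel target these runs exercise was assembled earlier through nvmet's configfs tree (nvmf/common.sh@686-705 above). xtrace hides the redirection targets, so the attribute paths below are an assumption based on the standard kernel nvmet configfs ABI; the values come from the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_serial   # assumed target file
    echo 1 > $subsys/attr_allow_any_host                          # assumed target file
    echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
    echo 1 > $subsys/namespaces/1/enable
    echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
    echo tcp > $nvmet/ports/1/addr_trtype
    echo 4420 > $nvmet/ports/1/addr_trsvcid
    echo ipv4 > $nvmet/ports/1/addr_adrfam
    ln -s $subsys $nvmet/ports/1/subsystems/

The clean_kernel_target teardown later in the log undoes this in reverse: unlink the port's subsystem symlink, rmdir the namespace, port, and subsystem, then modprobe -r nvmet_tcp nvmet.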
00:46:46.019 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35833, failed: 0 00:46:46.019 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 8966, failed to submit 26867 00:46:46.019 success 0, unsuccessful 8966, failed 0 00:46:46.019 10:57:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:46.020 10:57:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:49.312 Initializing NVMe Controllers 00:46:49.312 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:49.312 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:49.312 Initialization complete. Launching workers. 00:46:49.312 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33882, failed: 0 00:46:49.312 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 8482, failed to submit 25400 00:46:49.312 success 0, unsuccessful 8482, failed 0 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:49.312 10:57:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:50.692 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:50.692 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:50.951 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:50.951 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:50.951 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:50.951 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:50.951 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:46:50.951 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:50.951 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:46:50.951 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:46:50.951 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:46:50.951 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:46:50.951 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:46:50.951 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:46:50.951 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:46:50.951 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:46:51.889 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:46:52.148 00:46:52.148 real 0m15.877s 00:46:52.148 user 0m7.033s 00:46:52.148 sys 0m4.172s 00:46:52.148 10:57:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:52.148 10:57:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:52.148 ************************************ 00:46:52.148 END TEST kernel_target_abort 00:46:52.148 ************************************ 00:46:52.148 10:57:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:52.148 10:57:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:52.148 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:52.148 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:52.148 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:52.149 rmmod nvme_tcp 00:46:52.149 rmmod nvme_fabrics 00:46:52.149 rmmod nvme_keyring 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2324555 ']' 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2324555 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2324555 ']' 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2324555 00:46:52.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2324555) - No such process 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2324555 is not found' 00:46:52.149 Process with pid 2324555 is not found 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:52.149 10:57:36 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:54.055 Waiting for block devices as requested 00:46:54.055 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:46:54.055 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:54.314 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:54.314 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:54.314 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:54.573 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:54.573 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:54.573 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:54.833 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:46:54.833 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:46:54.833 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:46:55.093 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:46:55.093 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:46:55.093 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:46:55.352 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:46:55.352 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:46:55.352 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:55.611 10:57:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:57.518 10:57:42 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:57.518 00:46:57.518 real 0m42.326s 00:46:57.518 user 1m4.853s 00:46:57.518 sys 0m12.265s 00:46:57.518 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:57.518 10:57:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:57.518 ************************************ 00:46:57.518 END TEST nvmf_abort_qd_sizes 00:46:57.518 ************************************ 00:46:57.518 10:57:42 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:57.518 10:57:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:57.518 10:57:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:57.518 10:57:42 -- common/autotest_common.sh@10 -- # set +x 00:46:57.778 ************************************ 00:46:57.778 START TEST keyring_file 00:46:57.778 ************************************ 00:46:57.778 10:57:42 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:57.778 * Looking for test storage... 
00:46:57.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:57.778 10:57:42 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:57.778 10:57:42 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:46:57.778 10:57:42 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:57.778 10:57:42 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:57.778 10:57:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:57.778 10:57:42 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:57.778 10:57:42 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:57.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:57.778 --rc genhtml_branch_coverage=1 00:46:57.778 --rc genhtml_function_coverage=1 00:46:57.778 --rc genhtml_legend=1 00:46:57.778 --rc geninfo_all_blocks=1 00:46:57.779 --rc geninfo_unexecuted_blocks=1 00:46:57.779 00:46:57.779 ' 00:46:57.779 10:57:42 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:57.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:57.779 --rc genhtml_branch_coverage=1 00:46:57.779 --rc genhtml_function_coverage=1 00:46:57.779 --rc genhtml_legend=1 00:46:57.779 --rc geninfo_all_blocks=1 
00:46:57.779 --rc geninfo_unexecuted_blocks=1 00:46:57.779 00:46:57.779 ' 00:46:57.779 10:57:42 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:57.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:57.779 --rc genhtml_branch_coverage=1 00:46:57.779 --rc genhtml_function_coverage=1 00:46:57.779 --rc genhtml_legend=1 00:46:57.779 --rc geninfo_all_blocks=1 00:46:57.779 --rc geninfo_unexecuted_blocks=1 00:46:57.779 00:46:57.779 ' 00:46:57.779 10:57:42 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:57.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:57.779 --rc genhtml_branch_coverage=1 00:46:57.779 --rc genhtml_function_coverage=1 00:46:57.779 --rc genhtml_legend=1 00:46:57.779 --rc geninfo_all_blocks=1 00:46:57.779 --rc geninfo_unexecuted_blocks=1 00:46:57.779 00:46:57.779 ' 00:46:57.779 10:57:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:57.779 10:57:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:57.779 10:57:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:57.779 10:57:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:57.779 10:57:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:57.779 10:57:42 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:57.779 10:57:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:57.779 10:57:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:57.779 10:57:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:57.779 10:57:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:57.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:57.779 10:57:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:57.779 10:57:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:57.779 10:57:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:57.779 10:57:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:57.779 10:57:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:57.779 10:57:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
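The prep_key trace starting here writes each hex key to a private temp file in the TLS PSK interchange format. The inline "python -" step amounts to roughly the following (a sketch: the key-plus-CRC32 layout is an assumption based on the NVMe/TCP TLS PSK interchange format, not something visible in the log):

    key=00112233445566778899aabbccddeeff   # key0 from file.sh
    python3 - <<EOF
    import base64, zlib
    k = bytes.fromhex("$key")
    crc = zlib.crc32(k).to_bytes(4, "little")   # 4-byte CRC32 appended to the key
    print("NVMeTLSkey-1:00:" + base64.b64encode(k + crc).decode() + ":")
    EOF

The chmod 0600 that follows appears to matter to the file-based keyring, which rejects overly permissive key files; the chmod 0660 near the end of this excerpt sets up exactly that negative case.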
00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2cGEIgCBS7 00:46:57.779 10:57:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:57.779 10:57:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:57.780 10:57:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:57.780 10:57:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2cGEIgCBS7 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2cGEIgCBS7 00:46:58.038 10:57:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2cGEIgCBS7 00:46:58.038 10:57:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HvMlJGDMqf 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:58.038 10:57:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:58.038 10:57:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:58.038 10:57:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:58.038 10:57:42 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:58.038 10:57:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:58.038 10:57:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HvMlJGDMqf 00:46:58.038 10:57:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HvMlJGDMqf 00:46:58.038 10:57:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.HvMlJGDMqf 00:46:58.038 10:57:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=2331029 00:46:58.038 10:57:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:58.038 10:57:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2331029 00:46:58.038 10:57:42 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2331029 ']' 00:46:58.038 10:57:42 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:58.038 10:57:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:58.038 10:57:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:58.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:58.038 10:57:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:58.038 10:57:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:58.038 [2024-12-09 10:57:42.610505] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:46:58.038 [2024-12-09 10:57:42.610610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331029 ] 00:46:58.297 [2024-12-09 10:57:42.733081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:58.297 [2024-12-09 10:57:42.851965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:59.233 10:57:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:59.233 [2024-12-09 10:57:43.710065] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:59.233 null0 00:46:59.233 [2024-12-09 10:57:43.742431] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:59.233 [2024-12-09 10:57:43.743276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:59.233 10:57:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:59.233 [2024-12-09 10:57:43.770477] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:59.233 request: 00:46:59.233 { 00:46:59.233 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:59.233 "secure_channel": false, 00:46:59.233 "listen_address": { 00:46:59.233 "trtype": "tcp", 00:46:59.233 "traddr": "127.0.0.1", 00:46:59.233 "trsvcid": "4420" 00:46:59.233 }, 00:46:59.233 "method": "nvmf_subsystem_add_listener", 00:46:59.233 "req_id": 1 00:46:59.233 } 00:46:59.233 Got JSON-RPC error response 00:46:59.233 response: 00:46:59.233 { 00:46:59.233 
"code": -32602, 00:46:59.233 "message": "Invalid parameters" 00:46:59.233 } 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:59.233 10:57:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=2331196 00:46:59.233 10:57:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:59.233 10:57:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2331196 /var/tmp/bperf.sock 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2331196 ']' 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:59.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:59.233 10:57:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:59.233 [2024-12-09 10:57:43.827204] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:46:59.233 [2024-12-09 10:57:43.827300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331196 ] 00:46:59.492 [2024-12-09 10:57:43.957741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:59.493 [2024-12-09 10:57:44.079350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:00.062 10:57:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:00.062 10:57:44 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:00.062 10:57:44 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:00.062 10:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:00.321 10:57:44 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HvMlJGDMqf 00:47:00.321 10:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HvMlJGDMqf 00:47:00.890 10:57:45 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:47:00.890 10:57:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:47:00.890 10:57:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:00.890 10:57:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:00.890 10:57:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:47:01.460 10:57:45 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.2cGEIgCBS7 == \/\t\m\p\/\t\m\p\.\2\c\G\E\I\g\C\B\S\7 ]] 00:47:01.460 10:57:45 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:47:01.460 10:57:45 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:47:01.460 10:57:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:01.460 10:57:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:01.460 10:57:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:02.030 10:57:46 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.HvMlJGDMqf == \/\t\m\p\/\t\m\p\.\H\v\M\l\J\G\D\M\q\f ]] 00:47:02.030 10:57:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:47:02.030 10:57:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:02.030 10:57:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:02.030 10:57:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:02.030 10:57:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:02.030 10:57:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:02.599 10:57:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:47:02.599 10:57:47 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:47:02.599 10:57:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:02.599 10:57:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:02.599 10:57:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:02.599 10:57:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:02.599 10:57:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:03.168 10:57:47 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:47:03.168 10:57:47 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:03.168 10:57:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:03.737 [2024-12-09 10:57:48.161680] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:03.737 nvme0n1 00:47:03.737 10:57:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:47:03.737 10:57:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:03.737 10:57:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:03.737 10:57:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:03.737 10:57:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:03.737 10:57:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:04.307 10:57:48 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:47:04.307 10:57:48 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:47:04.307 10:57:48 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:47:04.307 10:57:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:04.307 10:57:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:04.307 10:57:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:04.307 10:57:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:05.245 10:57:49 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:47:05.245 10:57:49 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:05.245 Running I/O for 1 seconds... 00:47:06.181 4125.00 IOPS, 16.11 MiB/s
00:47:06.181 Latency(us)
00:47:06.181 [2024-12-09T09:57:50.835Z] Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average       min       max
00:47:06.181 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:47:06.181 nvme0n1                     :       1.02  4179.16    16.32     0.00     0.00   30434.44  11699.39  50875.35
00:47:06.181 [2024-12-09T09:57:50.835Z] ===================================================================================================================
00:47:06.181 [2024-12-09T09:57:50.835Z] Total                       :             4179.16    16.32     0.00     0.00   30434.44  11699.39  50875.35
00:47:06.181 { 00:47:06.181 "results": [ 00:47:06.181 { 00:47:06.181 "job": "nvme0n1", 00:47:06.181 "core_mask": "0x2", 00:47:06.181 "workload": "randrw", 00:47:06.181 "percentage": 50, 00:47:06.181 "status": "finished", 00:47:06.181 "queue_depth": 128, 00:47:06.181 "io_size": 4096, 00:47:06.181 "runtime": 1.017908, 00:47:06.181 "iops": 4179.159609709325, 00:47:06.181 "mibps": 16.32484222542705, 00:47:06.181 "io_failed": 0, 00:47:06.181 "io_timeout": 0, 00:47:06.181 "avg_latency_us": 30434.44184923993, 00:47:06.181 "min_latency_us": 11699.38962962963, 00:47:06.181 "max_latency_us": 50875.35407407407 00:47:06.181 } 00:47:06.181 ], 00:47:06.181 "core_count": 1 00:47:06.181 } 00:47:06.181 10:57:50 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:06.181 10:57:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:06.749 10:57:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:47:06.749 10:57:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:06.749 10:57:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:06.749 10:57:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:06.749 10:57:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:06.749 10:57:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.317 10:57:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:47:07.317 10:57:51 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:47:07.317 10:57:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:07.317 10:57:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:07.317 10:57:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.317 10:57:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:07.317 10:57:51 keyring_file -- keyring/common.sh@8 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.576 10:57:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:47:07.576 10:57:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:07.576 10:57:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:07.576 10:57:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:07.576 10:57:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:07.576 10:57:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.576 10:57:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:07.576 10:57:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.576 10:57:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:07.576 10:57:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:07.834 [2024-12-09 10:57:52.433685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:07.834 [2024-12-09 10:57:52.433696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b87b0 (107): Transport endpoint is not connected 00:47:07.834 [2024-12-09 10:57:52.434673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b87b0 (9): Bad file descriptor 00:47:07.834 [2024-12-09 10:57:52.435666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:07.834 [2024-12-09 10:57:52.435717] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:07.834 [2024-12-09 10:57:52.435780] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:07.834 [2024-12-09 10:57:52.435818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:47:07.834 request: 00:47:07.834 { 00:47:07.834 "name": "nvme0", 00:47:07.834 "trtype": "tcp", 00:47:07.834 "traddr": "127.0.0.1", 00:47:07.834 "adrfam": "ipv4", 00:47:07.834 "trsvcid": "4420", 00:47:07.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:07.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:07.834 "prchk_reftag": false, 00:47:07.834 "prchk_guard": false, 00:47:07.834 "hdgst": false, 00:47:07.834 "ddgst": false, 00:47:07.834 "psk": "key1", 00:47:07.834 "allow_unrecognized_csi": false, 00:47:07.834 "method": "bdev_nvme_attach_controller", 00:47:07.834 "req_id": 1 00:47:07.834 } 00:47:07.834 Got JSON-RPC error response 00:47:07.834 response: 00:47:07.834 { 00:47:07.834 "code": -5, 00:47:07.834 "message": "Input/output error" 00:47:07.834 } 00:47:07.834 10:57:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:07.834 10:57:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:07.834 10:57:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:07.834 10:57:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:07.834 10:57:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:47:07.834 10:57:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:07.834 10:57:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:07.834 10:57:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:07.834 10:57:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.835 10:57:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:08.402 10:57:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:47:08.402 10:57:52 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:47:08.402 10:57:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:08.402 10:57:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:08.402 10:57:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:08.402 10:57:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:08.402 10:57:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:08.971 10:57:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:47:08.971 10:57:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:47:08.971 10:57:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:09.540 10:57:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:47:09.541 10:57:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:10.109 10:57:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:47:10.109 10:57:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:10.109 10:57:54 keyring_file -- keyring/file.sh@78 -- # jq length 00:47:10.678 10:57:55 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:47:10.678 10:57:55 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.2cGEIgCBS7 00:47:10.678 10:57:55 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:10.678 10:57:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:10.678 10:57:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:10.678 10:57:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:10.678 10:57:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:10.678 10:57:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:10.678 10:57:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:10.678 10:57:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:10.678 10:57:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:11.248 [2024-12-09 10:57:55.784195] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2cGEIgCBS7': 0100660 00:47:11.248 [2024-12-09 10:57:55.784281] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:11.248 request: 00:47:11.248 { 00:47:11.248 "name": "key0", 00:47:11.248 "path": "/tmp/tmp.2cGEIgCBS7", 00:47:11.248 "method": "keyring_file_add_key", 00:47:11.248 "req_id": 1 00:47:11.248 } 00:47:11.248 Got JSON-RPC error response 00:47:11.248 response: 00:47:11.248 { 00:47:11.248 "code": -1, 00:47:11.248 "message": "Operation not permitted" 00:47:11.248 } 00:47:11.248 10:57:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:11.248 10:57:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:11.248 10:57:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:11.248 10:57:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:11.248 10:57:55 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.2cGEIgCBS7 00:47:11.248 10:57:55 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:11.248 10:57:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2cGEIgCBS7 00:47:11.816 10:57:56 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.2cGEIgCBS7 00:47:11.816 10:57:56 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:47:11.816 10:57:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:11.816 10:57:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:11.816 10:57:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:11.816 10:57:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:11.816 10:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:12.076 10:57:56 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:47:12.076 10:57:56 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:12.076 10:57:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:12.076 10:57:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:12.076 10:57:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:12.076 10:57:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:12.076 10:57:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:12.076 10:57:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:12.076 10:57:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:12.076 10:57:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:12.645 [2024-12-09 10:57:57.039868] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2cGEIgCBS7': No such file or directory 00:47:12.645 [2024-12-09 10:57:57.039912] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:12.645 [2024-12-09 10:57:57.039940] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:12.645 [2024-12-09 10:57:57.039956] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:47:12.645 [2024-12-09 10:57:57.039972] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:12.645 [2024-12-09 10:57:57.040017] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:12.645 request: 00:47:12.645 { 00:47:12.645 "name": "nvme0", 00:47:12.645 "trtype": "tcp", 00:47:12.645 "traddr": "127.0.0.1", 00:47:12.645 "adrfam": "ipv4", 00:47:12.645 "trsvcid": "4420", 00:47:12.645 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:12.645 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:12.645 "prchk_reftag": false, 00:47:12.645 "prchk_guard": false, 00:47:12.645 "hdgst": false, 00:47:12.645 "ddgst": false, 00:47:12.645 "psk": "key0", 00:47:12.645 "allow_unrecognized_csi": false, 00:47:12.645 "method": "bdev_nvme_attach_controller", 00:47:12.645 "req_id": 1 00:47:12.645 } 00:47:12.645 Got JSON-RPC error response 00:47:12.645 response: 00:47:12.645 { 00:47:12.645 "code": -19, 00:47:12.645 "message": "No such device" 00:47:12.645 } 00:47:12.645 10:57:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:12.645 10:57:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:12.645 10:57:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:12.645 10:57:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:12.645 10:57:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:47:12.645 10:57:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:13.216 10:57:57 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y7vqgxJWDJ 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:13.217 10:57:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:13.217 10:57:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:13.217 10:57:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:13.217 10:57:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:13.217 10:57:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:13.217 10:57:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y7vqgxJWDJ 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y7vqgxJWDJ 00:47:13.217 10:57:57 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Y7vqgxJWDJ 00:47:13.217 10:57:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y7vqgxJWDJ 00:47:13.217 10:57:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y7vqgxJWDJ 00:47:13.786 10:57:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:13.786 10:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:14.352 nvme0n1 00:47:14.352 10:57:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:47:14.352 10:57:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:14.352 10:57:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:14.352 10:57:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:14.353 10:57:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:14.353 10:57:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:14.919 10:57:59 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:47:14.919 10:57:59 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:47:14.919 10:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:15.177 10:57:59 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:47:15.177 10:57:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:15.177 10:57:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.177 10:57:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key0")' 00:47:15.177 10:57:59 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:47:15.436 10:58:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:15.436 10:58:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:15.436 10:58:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:15.436 10:58:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:15.436 10:58:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:15.436 10:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:15.436 10:58:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:16.004 10:58:00 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:47:16.004 10:58:00 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:16.004 10:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:16.263 10:58:00 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:47:16.263 10:58:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:16.263 10:58:00 keyring_file -- keyring/file.sh@105 -- # jq length 00:47:16.523 10:58:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:47:16.524 10:58:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y7vqgxJWDJ 00:47:16.524 10:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y7vqgxJWDJ 00:47:16.782 10:58:01 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HvMlJGDMqf 00:47:16.782 10:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HvMlJGDMqf 00:47:17.349 10:58:01 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:17.349 10:58:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:17.608 nvme0n1 00:47:17.608 10:58:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:47:17.608 10:58:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:18.546 10:58:02 keyring_file -- keyring/file.sh@113 -- # config='{ 00:47:18.546 "subsystems": [ 00:47:18.546 { 00:47:18.546 "subsystem": "keyring", 00:47:18.546 "config": [ 00:47:18.546 { 00:47:18.546 "method": "keyring_file_add_key", 00:47:18.546 "params": { 00:47:18.546 "name": "key0", 00:47:18.546 "path": "/tmp/tmp.Y7vqgxJWDJ" 00:47:18.546 } 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "method": "keyring_file_add_key", 00:47:18.546 "params": { 00:47:18.546 "name": "key1", 00:47:18.546 "path": "/tmp/tmp.HvMlJGDMqf" 00:47:18.546 } 00:47:18.546 } 00:47:18.546 ] 00:47:18.546 
}, 00:47:18.546 { 00:47:18.546 "subsystem": "iobuf", 00:47:18.546 "config": [ 00:47:18.546 { 00:47:18.546 "method": "iobuf_set_options", 00:47:18.546 "params": { 00:47:18.546 "small_pool_count": 8192, 00:47:18.546 "large_pool_count": 1024, 00:47:18.546 "small_bufsize": 8192, 00:47:18.546 "large_bufsize": 135168, 00:47:18.546 "enable_numa": false 00:47:18.546 } 00:47:18.546 } 00:47:18.546 ] 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "subsystem": "sock", 00:47:18.546 "config": [ 00:47:18.546 { 00:47:18.546 "method": "sock_set_default_impl", 00:47:18.546 "params": { 00:47:18.546 "impl_name": "posix" 00:47:18.546 } 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "method": "sock_impl_set_options", 00:47:18.546 "params": { 00:47:18.546 "impl_name": "ssl", 00:47:18.546 "recv_buf_size": 4096, 00:47:18.546 "send_buf_size": 4096, 00:47:18.546 "enable_recv_pipe": true, 00:47:18.546 "enable_quickack": false, 00:47:18.546 "enable_placement_id": 0, 00:47:18.546 "enable_zerocopy_send_server": true, 00:47:18.546 "enable_zerocopy_send_client": false, 00:47:18.546 "zerocopy_threshold": 0, 00:47:18.546 "tls_version": 0, 00:47:18.546 "enable_ktls": false 00:47:18.546 } 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "method": "sock_impl_set_options", 00:47:18.546 "params": { 00:47:18.546 "impl_name": "posix", 00:47:18.546 "recv_buf_size": 2097152, 00:47:18.546 "send_buf_size": 2097152, 00:47:18.546 "enable_recv_pipe": true, 00:47:18.546 "enable_quickack": false, 00:47:18.546 "enable_placement_id": 0, 00:47:18.546 "enable_zerocopy_send_server": true, 00:47:18.546 "enable_zerocopy_send_client": false, 00:47:18.546 "zerocopy_threshold": 0, 00:47:18.546 "tls_version": 0, 00:47:18.546 "enable_ktls": false 00:47:18.546 } 00:47:18.546 } 00:47:18.546 ] 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "subsystem": "vmd", 00:47:18.546 "config": [] 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "subsystem": "accel", 00:47:18.546 "config": [ 00:47:18.546 { 00:47:18.546 "method": "accel_set_options", 00:47:18.546 "params": { 00:47:18.546 "small_cache_size": 128, 00:47:18.546 "large_cache_size": 16, 00:47:18.546 "task_count": 2048, 00:47:18.546 "sequence_count": 2048, 00:47:18.546 "buf_count": 2048 00:47:18.546 } 00:47:18.546 } 00:47:18.546 ] 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "subsystem": "bdev", 00:47:18.546 "config": [ 00:47:18.546 { 00:47:18.546 "method": "bdev_set_options", 00:47:18.546 "params": { 00:47:18.546 "bdev_io_pool_size": 65535, 00:47:18.546 "bdev_io_cache_size": 256, 00:47:18.546 "bdev_auto_examine": true, 00:47:18.546 "iobuf_small_cache_size": 128, 00:47:18.546 "iobuf_large_cache_size": 16 00:47:18.546 } 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "method": "bdev_raid_set_options", 00:47:18.546 "params": { 00:47:18.546 "process_window_size_kb": 1024, 00:47:18.546 "process_max_bandwidth_mb_sec": 0 00:47:18.546 } 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "method": "bdev_iscsi_set_options", 00:47:18.546 "params": { 00:47:18.546 "timeout_sec": 30 00:47:18.546 } 00:47:18.546 }, 00:47:18.546 { 00:47:18.546 "method": "bdev_nvme_set_options", 00:47:18.546 "params": { 00:47:18.546 "action_on_timeout": "none", 00:47:18.546 "timeout_us": 0, 00:47:18.546 "timeout_admin_us": 0, 00:47:18.546 "keep_alive_timeout_ms": 10000, 00:47:18.546 "arbitration_burst": 0, 00:47:18.546 "low_priority_weight": 0, 00:47:18.546 "medium_priority_weight": 0, 00:47:18.546 "high_priority_weight": 0, 00:47:18.546 "nvme_adminq_poll_period_us": 10000, 00:47:18.546 "nvme_ioq_poll_period_us": 0, 00:47:18.546 "io_queue_requests": 512, 00:47:18.546 
"delay_cmd_submit": true, 00:47:18.546 "transport_retry_count": 4, 00:47:18.546 "bdev_retry_count": 3, 00:47:18.546 "transport_ack_timeout": 0, 00:47:18.546 "ctrlr_loss_timeout_sec": 0, 00:47:18.546 "reconnect_delay_sec": 0, 00:47:18.546 "fast_io_fail_timeout_sec": 0, 00:47:18.546 "disable_auto_failback": false, 00:47:18.546 "generate_uuids": false, 00:47:18.546 "transport_tos": 0, 00:47:18.546 "nvme_error_stat": false, 00:47:18.546 "rdma_srq_size": 0, 00:47:18.546 "io_path_stat": false, 00:47:18.546 "allow_accel_sequence": false, 00:47:18.546 "rdma_max_cq_size": 0, 00:47:18.546 "rdma_cm_event_timeout_ms": 0, 00:47:18.546 "dhchap_digests": [ 00:47:18.546 "sha256", 00:47:18.546 "sha384", 00:47:18.546 "sha512" 00:47:18.547 ], 00:47:18.547 "dhchap_dhgroups": [ 00:47:18.547 "null", 00:47:18.547 "ffdhe2048", 00:47:18.547 "ffdhe3072", 00:47:18.547 "ffdhe4096", 00:47:18.547 "ffdhe6144", 00:47:18.547 "ffdhe8192" 00:47:18.547 ] 00:47:18.547 } 00:47:18.547 }, 00:47:18.547 { 00:47:18.547 "method": "bdev_nvme_attach_controller", 00:47:18.547 "params": { 00:47:18.547 "name": "nvme0", 00:47:18.547 "trtype": "TCP", 00:47:18.547 "adrfam": "IPv4", 00:47:18.547 "traddr": "127.0.0.1", 00:47:18.547 "trsvcid": "4420", 00:47:18.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:18.547 "prchk_reftag": false, 00:47:18.547 "prchk_guard": false, 00:47:18.547 "ctrlr_loss_timeout_sec": 0, 00:47:18.547 "reconnect_delay_sec": 0, 00:47:18.547 "fast_io_fail_timeout_sec": 0, 00:47:18.547 "psk": "key0", 00:47:18.547 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:18.547 "hdgst": false, 00:47:18.547 "ddgst": false, 00:47:18.547 "multipath": "multipath" 00:47:18.547 } 00:47:18.547 }, 00:47:18.547 { 00:47:18.547 "method": "bdev_nvme_set_hotplug", 00:47:18.547 "params": { 00:47:18.547 "period_us": 100000, 00:47:18.547 "enable": false 00:47:18.547 } 00:47:18.547 }, 00:47:18.547 { 00:47:18.547 "method": "bdev_wait_for_examine" 00:47:18.547 } 00:47:18.547 ] 00:47:18.547 }, 00:47:18.547 { 00:47:18.547 "subsystem": "nbd", 00:47:18.547 "config": [] 00:47:18.547 } 00:47:18.547 ] 00:47:18.547 }' 00:47:18.547 10:58:02 keyring_file -- keyring/file.sh@115 -- # killprocess 2331196 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2331196 ']' 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2331196 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331196 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331196' 00:47:18.547 killing process with pid 2331196 00:47:18.547 10:58:02 keyring_file -- common/autotest_common.sh@973 -- # kill 2331196 00:47:18.547 Received shutdown signal, test time was about 1.000000 seconds 00:47:18.547 00:47:18.547 Latency(us) 00:47:18.547 [2024-12-09T09:58:03.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:18.547 [2024-12-09T09:58:03.201Z] =================================================================================================================== 00:47:18.547 [2024-12-09T09:58:03.201Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:18.547 10:58:02 
keyring_file -- common/autotest_common.sh@978 -- # wait 2331196 00:47:18.806 10:58:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=2333500 00:47:18.806 10:58:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2333500 /var/tmp/bperf.sock 00:47:18.806 10:58:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2333500 ']' 00:47:18.806 10:58:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:18.806 10:58:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:18.806 10:58:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:18.806 10:58:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:47:18.806 "subsystems": [ 00:47:18.806 { 00:47:18.806 "subsystem": "keyring", 00:47:18.806 "config": [ 00:47:18.806 { 00:47:18.806 "method": "keyring_file_add_key", 00:47:18.806 "params": { 00:47:18.806 "name": "key0", 00:47:18.806 "path": "/tmp/tmp.Y7vqgxJWDJ" 00:47:18.806 } 00:47:18.806 }, 00:47:18.806 { 00:47:18.806 "method": "keyring_file_add_key", 00:47:18.806 "params": { 00:47:18.806 "name": "key1", 00:47:18.807 "path": "/tmp/tmp.HvMlJGDMqf" 00:47:18.807 } 00:47:18.807 } 00:47:18.807 ] 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "subsystem": "iobuf", 00:47:18.807 "config": [ 00:47:18.807 { 00:47:18.807 "method": "iobuf_set_options", 00:47:18.807 "params": { 00:47:18.807 "small_pool_count": 8192, 00:47:18.807 "large_pool_count": 1024, 00:47:18.807 "small_bufsize": 8192, 00:47:18.807 "large_bufsize": 135168, 00:47:18.807 "enable_numa": false 00:47:18.807 } 00:47:18.807 } 00:47:18.807 ] 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "subsystem": "sock", 00:47:18.807 "config": [ 00:47:18.807 { 00:47:18.807 "method": "sock_set_default_impl", 00:47:18.807 "params": { 00:47:18.807 "impl_name": "posix" 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "sock_impl_set_options", 00:47:18.807 "params": { 00:47:18.807 "impl_name": "ssl", 00:47:18.807 "recv_buf_size": 4096, 00:47:18.807 "send_buf_size": 4096, 00:47:18.807 "enable_recv_pipe": true, 00:47:18.807 "enable_quickack": false, 00:47:18.807 "enable_placement_id": 0, 00:47:18.807 "enable_zerocopy_send_server": true, 00:47:18.807 "enable_zerocopy_send_client": false, 00:47:18.807 "zerocopy_threshold": 0, 00:47:18.807 "tls_version": 0, 00:47:18.807 "enable_ktls": false 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "sock_impl_set_options", 00:47:18.807 "params": { 00:47:18.807 "impl_name": "posix", 00:47:18.807 "recv_buf_size": 2097152, 00:47:18.807 "send_buf_size": 2097152, 00:47:18.807 "enable_recv_pipe": true, 00:47:18.807 "enable_quickack": false, 00:47:18.807 "enable_placement_id": 0, 00:47:18.807 "enable_zerocopy_send_server": true, 00:47:18.807 "enable_zerocopy_send_client": false, 00:47:18.807 "zerocopy_threshold": 0, 00:47:18.807 "tls_version": 0, 00:47:18.807 "enable_ktls": false 00:47:18.807 } 00:47:18.807 } 00:47:18.807 ] 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "subsystem": "vmd", 00:47:18.807 "config": [] 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "subsystem": "accel", 00:47:18.807 "config": [ 00:47:18.807 { 00:47:18.807 "method": "accel_set_options", 00:47:18.807 "params": { 00:47:18.807 "small_cache_size": 128, 00:47:18.807 "large_cache_size": 16, 00:47:18.807 "task_count": 2048, 00:47:18.807 "sequence_count": 2048, 00:47:18.807 "buf_count": 2048 00:47:18.807 } 
00:47:18.807 } 00:47:18.807 ] 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "subsystem": "bdev", 00:47:18.807 "config": [ 00:47:18.807 { 00:47:18.807 "method": "bdev_set_options", 00:47:18.807 "params": { 00:47:18.807 "bdev_io_pool_size": 65535, 00:47:18.807 "bdev_io_cache_size": 256, 00:47:18.807 "bdev_auto_examine": true, 00:47:18.807 "iobuf_small_cache_size": 128, 00:47:18.807 "iobuf_large_cache_size": 16 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "bdev_raid_set_options", 00:47:18.807 "params": { 00:47:18.807 "process_window_size_kb": 1024, 00:47:18.807 "process_max_bandwidth_mb_sec": 0 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "bdev_iscsi_set_options", 00:47:18.807 "params": { 00:47:18.807 "timeout_sec": 30 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "bdev_nvme_set_options", 00:47:18.807 "params": { 00:47:18.807 "action_on_timeout": "none", 00:47:18.807 "timeout_us": 0, 00:47:18.807 "timeout_admin_us": 0, 00:47:18.807 "keep_alive_timeout_ms": 10000, 00:47:18.807 "arbitration_burst": 0, 00:47:18.807 "low_priority_weight": 0, 00:47:18.807 "medium_priority_weight": 0, 00:47:18.807 "high_priority_weight": 0, 00:47:18.807 "nvme_adminq_poll_period_us": 10000, 00:47:18.807 "nvme_ioq_poll_period_us": 0, 00:47:18.807 "io_queue_requests": 512, 00:47:18.807 "delay_cmd_submit": true, 00:47:18.807 "transport_retry_count": 4, 00:47:18.807 "bdev_retry_count": 3, 00:47:18.807 "transport_ack_timeout": 0, 00:47:18.807 "ctrlr_loss_timeout_sec": 0, 00:47:18.807 "reconnect_delay_sec": 0, 00:47:18.807 "fast_io_fail_timeout_sec": 0, 00:47:18.807 "disable_auto_failback": false, 00:47:18.807 "generate_uuids": false, 00:47:18.807 "transport_tos": 0, 00:47:18.807 "nvme_error_stat": false, 00:47:18.807 "rdma_srq_size": 0, 00:47:18.807 10:58:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:47:18.807 "io_path_stat": false, 00:47:18.807 "allow_accel_sequence": false, 00:47:18.807 "rdma_max_cq_size": 0, 00:47:18.807 "rdma_cm_event_timeout_ms": 0, 00:47:18.807 "dhchap_digests": [ 00:47:18.807 "sha256", 00:47:18.807 "sha384", 00:47:18.807 "sha512" 00:47:18.807 ], 00:47:18.807 "dhchap_dhgroups": [ 00:47:18.807 "null", 00:47:18.807 "ffdhe2048", 00:47:18.807 "ffdhe3072", 00:47:18.807 "ffdhe4096", 00:47:18.807 "ffdhe6144", 00:47:18.807 "ffdhe8192" 00:47:18.807 ] 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "bdev_nvme_attach_controller", 00:47:18.807 "params": { 00:47:18.807 "name": "nvme0", 00:47:18.807 "trtype": "TCP", 00:47:18.807 "adrfam": "IPv4", 00:47:18.807 "traddr": "127.0.0.1", 00:47:18.807 "trsvcid": "4420", 00:47:18.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:18.807 "prchk_reftag": false, 00:47:18.807 "prchk_guard": false, 00:47:18.807 "ctrlr_loss_timeout_sec": 0, 00:47:18.807 "reconnect_delay_sec": 0, 00:47:18.807 "fast_io_fail_timeout_sec": 0, 00:47:18.807 "psk": "key0", 00:47:18.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:18.807 "hdgst": false, 00:47:18.807 "ddgst": false, 00:47:18.807 "multipath": "multipath" 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "bdev_nvme_set_hotplug", 00:47:18.807 "params": { 00:47:18.807 "period_us": 100000, 00:47:18.807 "enable": false 00:47:18.807 } 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "method": "bdev_wait_for_examine" 00:47:18.807 } 00:47:18.807 ] 00:47:18.807 }, 00:47:18.807 { 00:47:18.807 "subsystem": "nbd", 00:47:18.807 "config": [] 00:47:18.807 } 00:47:18.807 ] 00:47:18.807 }' 00:47:18.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:18.807 10:58:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:18.808 10:58:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:18.808 [2024-12-09 10:58:03.434898] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:47:18.808 [2024-12-09 10:58:03.435005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333500 ] 00:47:19.067 [2024-12-09 10:58:03.620394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:19.326 [2024-12-09 10:58:03.781679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:19.585 [2024-12-09 10:58:04.089359] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:19.843 10:58:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:19.843 10:58:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:19.843 10:58:04 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:47:19.843 10:58:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:19.843 10:58:04 keyring_file -- keyring/file.sh@121 -- # jq length 00:47:20.780 10:58:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:20.780 10:58:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:47:20.780 10:58:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:20.780 10:58:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:20.780 10:58:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:20.780 10:58:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:20.780 10:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:21.039 10:58:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:47:21.039 10:58:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:47:21.039 10:58:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:21.039 10:58:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:21.039 10:58:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:21.039 10:58:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:21.039 10:58:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:21.607 10:58:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:47:21.607 10:58:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:47:21.607 10:58:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:21.607 10:58:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:47:22.176 10:58:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:47:22.176 10:58:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:22.176 10:58:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Y7vqgxJWDJ /tmp/tmp.HvMlJGDMqf 00:47:22.176 10:58:06 keyring_file -- keyring/file.sh@20 -- # killprocess 2333500 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2333500 ']' 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2333500 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:22.176 10:58:06 
keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2333500 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2333500' 00:47:22.176 killing process with pid 2333500 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@973 -- # kill 2333500 00:47:22.176 Received shutdown signal, test time was about 1.000000 seconds 00:47:22.176 00:47:22.176 Latency(us) 00:47:22.176 [2024-12-09T09:58:06.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:22.176 [2024-12-09T09:58:06.830Z] =================================================================================================================== 00:47:22.176 [2024-12-09T09:58:06.830Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:22.176 10:58:06 keyring_file -- common/autotest_common.sh@978 -- # wait 2333500 00:47:22.437 10:58:06 keyring_file -- keyring/file.sh@21 -- # killprocess 2331029 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2331029 ']' 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2331029 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331029 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331029' 00:47:22.437 killing process with pid 2331029 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@973 -- # kill 2331029 00:47:22.437 10:58:06 keyring_file -- common/autotest_common.sh@978 -- # wait 2331029 00:47:23.376 00:47:23.376 real 0m25.471s 00:47:23.376 user 1m6.201s 00:47:23.376 sys 0m5.096s 00:47:23.376 10:58:07 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:23.376 10:58:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:23.376 ************************************ 00:47:23.376 END TEST keyring_file 00:47:23.376 ************************************ 00:47:23.376 10:58:07 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:47:23.376 10:58:07 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:23.376 10:58:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:23.376 10:58:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:23.376 10:58:07 -- common/autotest_common.sh@10 -- # set +x 00:47:23.376 ************************************ 00:47:23.376 START TEST keyring_linux 00:47:23.376 ************************************ 00:47:23.376 10:58:07 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:23.376 Joined 
session keyring: 290400292 00:47:23.376 * Looking for test storage... 00:47:23.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:23.376 10:58:07 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:23.376 10:58:07 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:47:23.376 10:58:07 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:23.376 10:58:08 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:23.376 10:58:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:23.376 10:58:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:23.376 10:58:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:23.376 10:58:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:23.376 10:58:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:23.637 10:58:08 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:23.637 10:58:08 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:23.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.637 --rc genhtml_branch_coverage=1 00:47:23.637 --rc genhtml_function_coverage=1 00:47:23.637 --rc genhtml_legend=1 00:47:23.637 --rc geninfo_all_blocks=1 00:47:23.637 --rc geninfo_unexecuted_blocks=1 00:47:23.637 00:47:23.637 ' 00:47:23.637 10:58:08 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:23.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.637 --rc genhtml_branch_coverage=1 00:47:23.637 --rc 
genhtml_function_coverage=1 00:47:23.637 --rc genhtml_legend=1 00:47:23.637 --rc geninfo_all_blocks=1 00:47:23.637 --rc geninfo_unexecuted_blocks=1 00:47:23.637 00:47:23.637 ' 00:47:23.637 10:58:08 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:23.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.637 --rc genhtml_branch_coverage=1 00:47:23.637 --rc genhtml_function_coverage=1 00:47:23.637 --rc genhtml_legend=1 00:47:23.637 --rc geninfo_all_blocks=1 00:47:23.637 --rc geninfo_unexecuted_blocks=1 00:47:23.637 00:47:23.637 ' 00:47:23.637 10:58:08 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:23.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.637 --rc genhtml_branch_coverage=1 00:47:23.637 --rc genhtml_function_coverage=1 00:47:23.637 --rc genhtml_legend=1 00:47:23.637 --rc geninfo_all_blocks=1 00:47:23.637 --rc geninfo_unexecuted_blocks=1 00:47:23.637 00:47:23.637 ' 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:23.637 10:58:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:23.637 10:58:08 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.637 10:58:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.637 10:58:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.637 10:58:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:23.637 10:58:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:23.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:23.637 10:58:08 keyring_linux -- 
keyring/common.sh@15 -- # local name key digest path 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:23.637 10:58:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:23.637 /tmp/:spdk-test:key0 00:47:23.637 10:58:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:23.637 10:58:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:23.638 10:58:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:23.638 10:58:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:23.638 10:58:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:23.638 10:58:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:23.638 10:58:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:23.638 10:58:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:23.638 10:58:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:23.638 10:58:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:23.638 10:58:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:23.638 10:58:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:23.638 10:58:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:23.638 /tmp/:spdk-test:key1 00:47:23.638 10:58:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2334123 00:47:23.638 10:58:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:23.638 10:58:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2334123 00:47:23.638 10:58:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2334123 ']' 00:47:23.638 10:58:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:23.638 10:58:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:23.638 10:58:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:47:23.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:23.638 10:58:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:23.638 10:58:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:23.896 [2024-12-09 10:58:08.409565] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 00:47:23.896 [2024-12-09 10:58:08.409770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334123 ] 00:47:24.155 [2024-12-09 10:58:08.578577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:24.155 [2024-12-09 10:58:08.697645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:25.536 10:58:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:25.536 10:58:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:25.536 10:58:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:25.536 10:58:09 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:25.536 10:58:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:25.536 [2024-12-09 10:58:09.819659] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:25.536 null0 00:47:25.536 [2024-12-09 10:58:09.853204] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:25.536 [2024-12-09 10:58:09.854198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:25.536 10:58:09 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:25.536 10:58:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:25.536 994910 00:47:25.536 10:58:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:25.536 786791637 00:47:25.536 10:58:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2334262 00:47:25.536 10:58:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:25.536 10:58:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2334262 /var/tmp/bperf.sock 00:47:25.536 10:58:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2334262 ']' 00:47:25.537 10:58:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:25.537 10:58:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:25.537 10:58:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:25.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:25.537 10:58:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:25.537 10:58:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:25.537 [2024-12-09 10:58:09.977502] Starting SPDK v25.01-pre git sha1 b7d7c4b24 / DPDK 24.03.0 initialization... 
00:47:25.537 [2024-12-09 10:58:09.977665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334262 ] 00:47:25.537 [2024-12-09 10:58:10.131907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:25.795 [2024-12-09 10:58:10.245898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:25.795 10:58:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:25.795 10:58:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:25.795 10:58:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:25.795 10:58:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:26.361 10:58:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:26.361 10:58:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:26.931 10:58:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:26.931 10:58:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:27.502 [2024-12-09 10:58:11.910949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:27.502 nvme0n1 00:47:27.502 10:58:12 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:27.502 10:58:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:27.502 10:58:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:27.502 10:58:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:27.502 10:58:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:27.502 10:58:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:28.072 10:58:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:28.072 10:58:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:28.072 10:58:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:28.072 10:58:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:28.072 10:58:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:28.072 10:58:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:28.072 10:58:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:29.013 10:58:13 keyring_linux -- keyring/linux.sh@25 -- # sn=994910 00:47:29.013 10:58:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:29.013 10:58:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:29.013 10:58:13 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 994910 == \9\9\4\9\1\0 ]] 00:47:29.013 10:58:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 994910 00:47:29.013 10:58:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:29.013 10:58:13 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:29.013 Running I/O for 1 seconds... 00:47:30.396 4510.00 IOPS, 17.62 MiB/s 00:47:30.396 Latency(us) 00:47:30.396 [2024-12-09T09:58:15.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:30.396 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:30.396 nvme0n1 : 1.03 4512.60 17.63 0.00 0.00 27946.15 7718.68 36700.16 00:47:30.396 [2024-12-09T09:58:15.050Z] =================================================================================================================== 00:47:30.396 [2024-12-09T09:58:15.050Z] Total : 4512.60 17.63 0.00 0.00 27946.15 7718.68 36700.16 00:47:30.396 { 00:47:30.396 "results": [ 00:47:30.396 { 00:47:30.396 "job": "nvme0n1", 00:47:30.397 "core_mask": "0x2", 00:47:30.397 "workload": "randread", 00:47:30.397 "status": "finished", 00:47:30.397 "queue_depth": 128, 00:47:30.397 "io_size": 4096, 00:47:30.397 "runtime": 1.028011, 00:47:30.397 "iops": 4512.597627846395, 00:47:30.397 "mibps": 17.62733448377498, 00:47:30.397 "io_failed": 0, 00:47:30.397 "io_timeout": 0, 00:47:30.397 "avg_latency_us": 27946.14666698602, 00:47:30.397 "min_latency_us": 7718.684444444444, 00:47:30.397 "max_latency_us": 36700.16 00:47:30.397 } 00:47:30.397 ], 00:47:30.397 "core_count": 1 00:47:30.397 } 00:47:30.397 10:58:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:30.397 10:58:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:30.969 10:58:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:30.969 10:58:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:30.969 10:58:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:30.969 10:58:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:30.969 10:58:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:30.969 10:58:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:31.539 10:58:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:31.539 10:58:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:31.539 10:58:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:31.539 10:58:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:31.539 10:58:16 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:47:31.539 10:58:16 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:31.539 
10:58:16 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:31.539 10:58:16 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:31.539 10:58:16 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:31.539 10:58:16 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:31.539 10:58:16 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:31.539 10:58:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:31.799 [2024-12-09 10:58:16.365894] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:31.799 [2024-12-09 10:58:16.366221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251b560 (107): Transport endpoint is not connected 00:47:31.799 [2024-12-09 10:58:16.367201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251b560 (9): Bad file descriptor 00:47:31.799 [2024-12-09 10:58:16.368195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:31.799 [2024-12-09 10:58:16.368244] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:31.799 [2024-12-09 10:58:16.368278] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:31.799 [2024-12-09 10:58:16.368314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:47:31.799 request: 00:47:31.799 { 00:47:31.799 "name": "nvme0", 00:47:31.799 "trtype": "tcp", 00:47:31.799 "traddr": "127.0.0.1", 00:47:31.799 "adrfam": "ipv4", 00:47:31.799 "trsvcid": "4420", 00:47:31.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:31.799 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:31.799 "prchk_reftag": false, 00:47:31.799 "prchk_guard": false, 00:47:31.799 "hdgst": false, 00:47:31.799 "ddgst": false, 00:47:31.799 "psk": ":spdk-test:key1", 00:47:31.799 "allow_unrecognized_csi": false, 00:47:31.799 "method": "bdev_nvme_attach_controller", 00:47:31.799 "req_id": 1 00:47:31.799 } 00:47:31.799 Got JSON-RPC error response 00:47:31.799 response: 00:47:31.799 { 00:47:31.799 "code": -5, 00:47:31.799 "message": "Input/output error" 00:47:31.799 } 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@33 -- # sn=994910 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 994910 00:47:31.799 1 links removed 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@33 -- # sn=786791637 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 786791637 00:47:31.799 1 links removed 00:47:31.799 10:58:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2334262 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2334262 ']' 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2334262 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:31.799 10:58:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2334262 00:47:32.060 10:58:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:32.060 10:58:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:32.060 10:58:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2334262' 00:47:32.060 killing process with pid 2334262 00:47:32.060 10:58:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 2334262 00:47:32.060 Received shutdown signal, test time was about 1.000000 seconds 00:47:32.060 00:47:32.060 
Latency(us) 00:47:32.060 [2024-12-09T09:58:16.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:32.060 [2024-12-09T09:58:16.714Z] =================================================================================================================== 00:47:32.060 [2024-12-09T09:58:16.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:32.060 10:58:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 2334262 00:47:32.320 10:58:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2334123 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2334123 ']' 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2334123 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2334123 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2334123' 00:47:32.320 killing process with pid 2334123 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 2334123 00:47:32.320 10:58:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 2334123 00:47:32.889 00:47:32.889 real 0m9.735s 00:47:32.889 user 0m20.242s 00:47:32.889 sys 0m2.604s 00:47:32.889 10:58:17 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:32.889 10:58:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:32.889 ************************************ 00:47:32.889 END TEST keyring_linux 00:47:32.889 ************************************ 00:47:32.889 10:58:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:47:32.889 10:58:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:32.889 10:58:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:32.889 10:58:17 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:47:32.889 10:58:17 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:47:32.889 10:58:17 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:47:32.889 10:58:17 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:47:32.889 10:58:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:32.889 10:58:17 -- common/autotest_common.sh@10 -- # set +x 00:47:32.889 10:58:17 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:47:32.889 10:58:17 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:47:32.889 10:58:17 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:47:32.889 10:58:17 -- common/autotest_common.sh@10 -- # set +x 00:47:36.181 INFO: APP EXITING 
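For orientation, the keyctl and "python -" steps traced in this test can be retraced by hand. The sketch below reuses the test's own key name and key0 hex string from the trace; the payload layout in the python one-liner (ASCII key bytes followed by a little-endian CRC32, base64-encoded) is an assumption inferred from the printed NVMeTLSkey-1 string, not confirmed against format_interchange_psk itself:

  # Sketch: rebuild the key0 interchange PSK and walk it through the session
  # keyring (@s) the way the test did. The payload layout below is an
  # assumption (key bytes + CRC32, little-endian), not taken from SPDK source.
  psk=$(python3 -c 'import base64, zlib; k = b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())')
  keyctl add user :spdk-test:key0 "$psk" @s    # prints the new key serial
  sn=$(keyctl search @s user :spdk-test:key0)  # resolve the serial, e.g. 994910 above
  keyctl print "$sn"                           # dump the stored PSK string
  keyctl unlink "$sn"                          # cleanup; expect "1 links removed"

This mirrors the prep_key, get_keysn, and unlink_key helpers from keyring/linux.sh as they appear in the trace.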
00:47:36.181 INFO: killing all VMs 00:47:36.181 INFO: killing vhost app 00:47:36.181 INFO: EXIT DONE 00:47:37.558 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:47:37.558 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:47:37.558 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:47:37.558 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:47:37.558 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:47:37.558 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:47:37.558 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:47:37.558 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:47:37.558 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:47:37.558 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:47:37.558 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:47:37.558 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:47:37.558 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:47:37.558 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:47:37.816 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:47:37.816 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:47:37.816 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:47:39.725 Cleaning 00:47:39.725 Removing: /var/run/dpdk/spdk0/config 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:39.725 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:39.725 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:39.725 Removing: /var/run/dpdk/spdk1/config 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:39.725 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:39.725 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:39.725 Removing: /var/run/dpdk/spdk2/config 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:39.725 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:39.725 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:39.725 Removing: /var/run/dpdk/spdk3/config 00:47:39.725 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:39.725 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:39.725 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:39.725 Removing: /var/run/dpdk/spdk4/config 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:39.725 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:39.725 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:39.725 Removing: /dev/shm/bdev_svc_trace.1 00:47:39.725 Removing: /dev/shm/nvmf_trace.0 00:47:39.985 Removing: /dev/shm/spdk_tgt_trace.pid1961267 00:47:39.985 Removing: /var/run/dpdk/spdk0 00:47:39.985 Removing: /var/run/dpdk/spdk1 00:47:39.985 Removing: /var/run/dpdk/spdk2 00:47:39.985 Removing: /var/run/dpdk/spdk3 00:47:39.985 Removing: /var/run/dpdk/spdk4 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1959316 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1960222 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1961267 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1961847 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1962538 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1962792 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1963529 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1963549 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1963925 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1965604 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1967188 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1967524 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1967977 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1968319 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1968641 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1968809 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1968962 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1969275 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1969598 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1972785 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1973043 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1973342 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1973474 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1974042 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1974172 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1974731 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1974750 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1975047 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1975177 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1975464 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1975483 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1976055 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1976264 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1976471 00:47:39.985 Removing: 
/var/run/dpdk/spdk_pid1978995 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1982038 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1989268 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1989700 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1992252 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1992534 00:47:39.985 Removing: /var/run/dpdk/spdk_pid1995573 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2000337 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2003309 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2010685 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2016315 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2017516 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2018189 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2029492 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2032438 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2060460 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2063900 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2069275 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2074470 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2074472 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2075013 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2075666 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2076265 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2076595 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2076722 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2076865 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2077002 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2077015 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2077661 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2078312 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2078858 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2079251 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2079302 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2079514 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2080789 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2081648 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2086998 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2133407 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2137487 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2138659 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2139979 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2140130 00:47:39.985 Removing: /var/run/dpdk/spdk_pid2140394 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2140532 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2141237 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2142675 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2144193 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2144884 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2146718 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2147303 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2147873 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2150547 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2154225 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2154226 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2154227 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2156720 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2161732 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2164490 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2169025 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2169981 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2171066 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2172149 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2175054 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2177923 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2180432 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2184957 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2184959 00:47:40.245 Removing: 
/var/run/dpdk/spdk_pid2188011 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2188147 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2188404 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2188854 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2188860 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2191793 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2192246 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2195200 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2197162 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2201748 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2205458 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2213652 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2218271 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2218281 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2234501 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2235151 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2235588 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2236104 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2236931 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2237598 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2238136 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2238547 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2241335 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2241589 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2245410 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2245583 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2249161 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2252036 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2259512 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2260016 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2263174 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2263336 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2266359 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2270594 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2273820 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2281388 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2287020 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2288201 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2288860 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2300599 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2302983 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2304978 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2310284 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2310299 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2313467 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2314859 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2316260 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2317008 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2318415 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2319291 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2325440 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2325819 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2326149 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2327745 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2328113 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2328507 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2331029 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2331196 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2333500 00:47:40.245 Removing: /var/run/dpdk/spdk_pid2334123 00:47:40.505 Removing: /var/run/dpdk/spdk_pid2334262 00:47:40.505 Clean 00:47:40.505 10:58:25 -- common/autotest_common.sh@1453 -- # return 0 00:47:40.506 10:58:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:47:40.506 10:58:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:40.506 10:58:25 -- common/autotest_common.sh@10 -- # set +x 00:47:40.506 10:58:25 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:47:40.506 10:58:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:40.506 10:58:25 -- common/autotest_common.sh@10 -- # set +x 00:47:40.506 10:58:25 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:40.506 10:58:25 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:40.506 10:58:25 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:40.506 10:58:25 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:47:40.506 10:58:25 -- spdk/autotest.sh@398 -- # hostname 00:47:40.506 10:58:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:41.073 geninfo: WARNING: invalid characters removed from testname! 00:49:17.573 10:59:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:17.573 11:00:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:25.712 11:00:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:33.848 11:00:17 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:41.986 11:00:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:49:50.124 11:00:34 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:50:00.134 11:00:43 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:50:00.134 11:00:43 -- spdk/autorun.sh@1 -- $ timing_finish 00:50:00.134 11:00:43 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:50:00.134 11:00:43 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:50:00.134 11:00:43 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:50:00.134 11:00:43 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:50:00.134 + [[ -n 1876142 ]] 00:50:00.134 + sudo kill 1876142 00:50:00.150 [Pipeline] } 00:50:00.165 [Pipeline] // stage 00:50:00.170 [Pipeline] } 00:50:00.184 [Pipeline] // timeout 00:50:00.189 [Pipeline] } 00:50:00.202 [Pipeline] // catchError 00:50:00.207 [Pipeline] } 00:50:00.221 [Pipeline] // wrap 00:50:00.226 [Pipeline] } 00:50:00.239 [Pipeline] // catchError 00:50:00.257 [Pipeline] stage 00:50:00.259 [Pipeline] { (Epilogue) 00:50:00.271 [Pipeline] catchError 00:50:00.273 [Pipeline] { 00:50:00.286 [Pipeline] echo 00:50:00.288 Cleanup processes 00:50:00.294 [Pipeline] sh 00:50:00.861 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:50:00.861 2347896 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:50:00.875 [Pipeline] sh 00:50:01.163 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:50:01.163 ++ grep -v 'sudo pgrep' 00:50:01.163 ++ awk '{print $1}' 00:50:01.163 + sudo kill -9 00:50:01.163 + true 00:50:01.176 [Pipeline] sh 00:50:01.466 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:50:28.056 [Pipeline] sh 00:50:28.347 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:50:28.607 Artifacts sizes are good 00:50:28.623 [Pipeline] archiveArtifacts 00:50:28.631 Archiving artifacts 00:50:29.058 [Pipeline] sh 00:50:29.345 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:50:29.361 [Pipeline] cleanWs 00:50:29.371 [WS-CLEANUP] Deleting project workspace... 00:50:29.372 [WS-CLEANUP] Deferred wipeout is used... 00:50:29.386 [WS-CLEANUP] done 00:50:29.388 [Pipeline] } 00:50:29.437 [Pipeline] // catchError 00:50:29.449 [Pipeline] sh 00:50:29.732 + logger -p user.info -t JENKINS-CI 00:50:29.740 [Pipeline] } 00:50:29.754 [Pipeline] // stage 00:50:29.765 [Pipeline] } 00:50:29.781 [Pipeline] // node 00:50:29.786 [Pipeline] End of Pipeline 00:50:29.823 Finished: SUCCESS
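For reference, the coverage post-processing traced near the end of the run reduces to a capture, merge, filter lcov pipeline. A condensed sketch, with the long --rc branch/function-coverage flags and the absolute workspace paths from the log omitted for brevity:

  # Capture per-test counters, then merge with the baseline capture:
  lcov -q -c --no-external -d ./spdk -t spdk-gp-08 -o cov_test.info
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # Filter out code that should not count toward SPDK coverage:
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
  lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
  lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
  lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info

Each -r pass rewrites cov_total.info in place, so DPDK, system headers, and the vmd/spdk_lspci/spdk_top sources never count against SPDK coverage.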